Final-Year-Project/docs/FOREGROUND_WEB_MOTION_DETECTION_IMPLEMENTATION_PLAN.md

Foreground Web Motion Detection Implementation Plan

Summary

Implement automatic motion detection for the camera role in the web app while the browser remains open in the foreground on a plugged-in phone. Reuse the existing backend motion event, notification, recording, and stream flows instead of introducing a new backend motion pipeline.

Goals

  • Detect motion automatically from the local camera preview in the web app.
  • Trigger the existing motion event lifecycle with minimal backend change.
  • Minimize battery and heat by using low-resolution frame analysis and adaptive sampling.
  • Keep the first release heuristic and deterministic rather than ML-based.
  • Make behavior observable and tunable from the camera dashboard.

Constraints

Assumptions

  • Manual motion controls remain available as an operator override.
  • triggeredBy can distinguish manual versus automatic events without a schema change.
  • Persisted per-device motion settings are desirable, but an in-memory first pass is acceptable.
  • The existing recording behavior tied to stream/motion events remains the intended user experience.

Non-Goals

  • True background detection while the browser tab is hidden.
  • ML person/pet/vehicle classification in the first release.
  • Replacing the current notification or recording architecture.
  • Solving appliance-grade uptime guarantees.

Existing Integration Points

Strategy

Phase 1: Add Motion Detector State And Controls

Add camera-side detector state to the Svelte store:

  • motionDetectionEnabled
  • motionDetectorStatus such as idle, warming_up, monitoring, triggered, cooldown
  • motionSensitivity
  • motionSampleIntervalMs
  • motionTriggerConsecutiveFrames
  • motionQuietCooldownMs
  • motionMinimumEventMs
  • motionScore
  • motionDebugEnabled
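
A minimal sketch of the detector state factory, assuming the field names above; the default values are illustrative and would be replaced by the tuning results from Phase 7. In the Svelte store this object would typically be wrapped in `writable`.

```javascript
// Camera-side motion detector state. Defaults are placeholder values
// pending Phase 7 tuning; the detector always starts disarmed.
function createMotionDetectorState() {
  return {
    motionDetectionEnabled: false,
    motionDetectorStatus: 'idle', // idle | warming_up | monitoring | triggered | cooldown
    motionSensitivity: 0.5,            // 0..1, higher means more sensitive
    motionSampleIntervalMs: 1000,      // ~1 fps baseline sampling
    motionTriggerConsecutiveFrames: 3, // frames needed to confirm motion
    motionQuietCooldownMs: 10000,      // quiet time required before ending an event
    motionMinimumEventMs: 5000,        // minimum event duration once triggered
    motionScore: 0,                    // latest normalized motion score
    motionDebugEnabled: false,
  };
}
```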

Expose controls on the camera dashboard:

  • Arm/disarm automatic detection
  • Sensitivity slider or preset selector
  • Low-power mode preset
  • Live debug score and current detector state

Keep the existing manual Simulate Motion Event and Stop Recording actions as a fallback.

Phase 2: Build A Lightweight Detector Engine

Create a dedicated detector module, for example WebApp/src/lib/app/motion-detector.js, owned by the camera dashboard flow.

Detector design:

  • Read from the existing localCameraStream
  • Draw frames into an offscreen or hidden canvas
  • Downsample aggressively to about 160x90 or 192x108
  • Convert to grayscale
  • Compare the current frame against the previous smoothed frame
  • Compute a normalized motion score such as changed-pixel ratio or block-delta score
  • Ignore tiny isolated noise with thresholding and optional block aggregation
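
The core scoring path can be sketched as three pure functions; function names and the smoothing factor are assumptions for illustration. `toGrayscale` consumes RGBA bytes as returned by a canvas `getImageData` call on the downsampled frame, and the score is a changed-pixel ratio, so it is independent of frame size.

```javascript
// Convert RGBA pixel data (e.g. from getImageData on a 160x90 canvas)
// to a single grayscale byte per pixel using the standard luma weights.
function toGrayscale(rgba) {
  const gray = new Uint8ClampedArray(rgba.length / 4);
  for (let i = 0; i < gray.length; i++) {
    const o = i * 4;
    gray[i] = 0.299 * rgba[o] + 0.587 * rgba[o + 1] + 0.114 * rgba[o + 2];
  }
  return gray;
}

// Normalized motion score: fraction of pixels whose absolute difference
// from the reference frame exceeds pixelThreshold. Thresholding per pixel
// suppresses tiny isolated sensor noise before aggregation.
function motionScore(prevFrame, currFrame, pixelThreshold = 25) {
  let changed = 0;
  for (let i = 0; i < currFrame.length; i++) {
    if (Math.abs(currFrame[i] - prevFrame[i]) > pixelThreshold) changed++;
  }
  return changed / currFrame.length;
}

// Exponentially smooth the reference frame so slow lighting drift is
// absorbed while genuine motion still produces large deltas. Storing the
// reference as a Float32Array avoids cumulative rounding.
function smoothReference(reference, currFrame, alpha = 0.2) {
  for (let i = 0; i < reference.length; i++) {
    reference[i] = (1 - alpha) * reference[i] + alpha * currFrame[i];
  }
  return reference;
}
```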

Battery and heat controls:

  • Default sampling at 1 fps
  • Burst to 4-6 fps only after suspicious motion begins
  • Return to low sampling after cooldown
  • Skip work if the preview is not ready, the detector is disarmed, or the tab is hidden
  • Avoid full-resolution processing and avoid network uploads during detection itself
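
A sketch of the adaptive sampling decision, assuming hypothetical field names (`armed`, `previewReady`, `documentHidden`, `status`) and illustrative rate constants; returning `null` means the tick is skipped entirely so no timer work is queued.

```javascript
// Pick the next sampling delay from detector state. Baseline is ~1 fps;
// burst to ~5 fps only while motion is suspected or an event is open,
// and skip work entirely when the detector should be idle.
function nextSampleIntervalMs(state, now) {
  const BASE_MS = 1000; // ~1 fps idle baseline
  const BURST_MS = 200; // ~5 fps during suspected motion
  if (!state.armed || !state.previewReady || state.documentHidden) return null;
  if (state.status === 'candidate_motion' || state.status === 'triggered') {
    return BURST_MS;
  }
  if (state.status === 'cooldown' &&
      now - state.cooldownStartedAt < state.quietCooldownMs) {
    return BURST_MS; // keep watching closely until cooldown expires
  }
  return BASE_MS;
}
```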

Phase 3: Add Event State Machine And Backend Reuse

Implement a camera-side state machine:

  • idle -> monitoring
  • monitoring -> candidate_motion
  • candidate_motion -> triggered
  • triggered -> cooldown
  • cooldown -> monitoring
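
The transitions above can be enforced with a small table; the extra edges back to idle (for disarming) are an assumption beyond the listed transitions, and failing loudly on an illegal jump is useful in debug mode.

```javascript
// Allowed detector state transitions, mirroring the list above, plus
// assumed disarm edges back to idle. Enforcing the table makes illegal
// jumps (e.g. idle -> triggered) fail loudly during development.
const TRANSITIONS = {
  idle: ['monitoring'],
  monitoring: ['candidate_motion', 'idle'],
  candidate_motion: ['triggered', 'monitoring'],
  triggered: ['cooldown'],
  cooldown: ['monitoring', 'idle'],
};

function transition(current, next) {
  if (!(TRANSITIONS[current] || []).includes(next)) {
    throw new Error(`illegal detector transition ${current} -> ${next}`);
  }
  return next;
}
```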

Trigger rules:

  • Require N consecutive high-motion frames before starting an event
  • Call the existing backend motion start endpoint once
  • Set triggeredBy to auto_motion
  • Hold the event open for at least motionMinimumEventMs
  • Only end after the score stays below threshold for motionQuietCooldownMs
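
The trigger rules above can be sketched as one detector step per sampled frame. Field names (`consecutive`, `startThreshold`, `endThreshold`) and the `startEvent`/`endEvent` callbacks standing in for the existing backend endpoints are assumptions; the key properties are that the backend start is called exactly once per event, and that hysteresis plus the quiet cooldown decide the end.

```javascript
// One detector tick: N consecutive high-score frames open an event; the
// event closes only after the minimum duration has elapsed AND the score
// has stayed below the (lower) end threshold for the quiet cooldown.
function tick(d, score, now, startEvent, endEvent) {
  if (d.state === 'monitoring' || d.state === 'candidate_motion') {
    d.consecutive = score >= d.startThreshold ? d.consecutive + 1 : 0;
    d.state = d.consecutive > 0 ? 'candidate_motion' : 'monitoring';
    if (d.consecutive >= d.triggerConsecutiveFrames) {
      d.state = 'triggered';
      d.eventStartedAt = now;
      d.lastLoudAt = now;
      startEvent({ triggeredBy: 'auto_motion' }); // backend start, called once
    }
  } else if (d.state === 'triggered') {
    // Hysteresis: endThreshold is lower than startThreshold.
    if (score >= d.endThreshold) d.lastLoudAt = now;
    const heldMinimum = now - d.eventStartedAt >= d.minimumEventMs;
    const quietLongEnough = now - d.lastLoudAt >= d.quietCooldownMs;
    if (heldMinimum && quietLongEnough) {
      d.state = 'cooldown';
      d.cooldownStartedAt = now;
      endEvent(); // backend end, no flapping on instantaneous dips
    }
  } else if (d.state === 'cooldown') {
    if (now - d.cooldownStartedAt >= d.quietCooldownMs) {
      d.state = 'monitoring';
      d.consecutive = 0;
    }
  }
  return d;
}
```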

This keeps backend changes minimal because the existing event lifecycle already fans out realtime alerts and push notifications.

Phase 4: Make Recording Behavior Predictable

The detector should not record constantly.

Recommended behavior:

  • Motion detection itself only analyzes low-res frames locally
  • When automatic motion is confirmed, call the existing start-motion flow
  • Continue using the current recording logic already associated with motion and streaming
  • End the motion event only after quiet cooldown, not on every instantaneous dip

This avoids repeated start/stop loops that waste device resources.

Phase 5: Add Persistence For Operator Settings

The initial implementation can use browser local storage for speed.

Second step:

  • Add motion settings persistence per camera device
  • Store settings either in devices.metadata if introduced later, or in a new device_motion_settings table
  • Load persisted settings on device registration or camera dashboard open
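
The first-pass local storage step could look like the sketch below; the key format is an assumption, `storage` is anything with `getItem`/`setItem` (`window.localStorage` in the browser), and keying by device id keeps multiple registered devices from clobbering each other.

```javascript
// Persist per-device operator settings in a Storage-like object.
function saveMotionSettings(storage, deviceId, settings) {
  storage.setItem(`motionSettings:${deviceId}`, JSON.stringify(settings));
}

// Load settings, merging over defaults so newly added fields still get
// sane values, and falling back safely on a missing or corrupted entry.
function loadMotionSettings(storage, deviceId, defaults) {
  const raw = storage.getItem(`motionSettings:${deviceId}`);
  if (!raw) return { ...defaults };
  try {
    return { ...defaults, ...JSON.parse(raw) };
  } catch {
    return { ...defaults }; // corrupted entry: ignore it
  }
}
```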

Suggested settings:

  • Enabled/armed
  • Sensitivity
  • Sample interval
  • Quiet cooldown
  • Minimum event duration
  • Optional region of interest

Phase 6: Observability And Debugging

Add operator-visible debug surfaces:

  • Current motion score
  • Detector state
  • Last trigger time
  • Count of suppressed candidate triggers

Add activity log entries for:

  • Detector armed/disarmed
  • Detector warmed up
  • Automatic motion started
  • Automatic motion ended
  • Detector paused because preview or socket is unavailable

Phase 7: Test And Tune

Testing should cover:

  • Low-motion idle scenes
  • Moderate lighting flicker
  • Real person entry into frame
  • Camera shake false positives
  • Reconnection behavior
  • Event deduplication

Tuning targets:

  • Low false positive rate in static indoor scenes
  • Trigger latency below about 2 seconds
  • CPU usage low enough to avoid obvious thermal throttling during foreground operation

File-Level Change Plan

Primary files:

Likely new files:

  • WebApp/src/lib/app/motion-detector.js
  • WebApp/src/lib/app/motion-detector.test.js or equivalent

Optional later backend files:

Risk Controls

  • Use hysteresis so one threshold starts motion and a lower threshold ends it.
  • Require consecutive-frame confirmation before starting events.
  • Pause detection when preview, permission, or socket connectivity is unavailable.
  • Keep all frame processing local and low resolution.
  • Keep manual controls available during rollout.
  • Ship with debug mode so threshold tuning is possible without code changes.

Operating Recommendations

  • Start with the Balanced profile on a plugged-in phone.
  • Use Low Power if the phone runs warm or the scene is mostly static.
  • Keep the browser tab visible and the camera dashboard open while detection is armed.
  • Leave debug mode off during normal operation and enable it only while tuning thresholds.
  • Prefer a stable camera mount and consistent indoor lighting to reduce false positives.

Acceptance Criteria

  • A camera-role web device can arm automatic motion detection from the camera dashboard.
  • When visible motion enters frame, the web app starts one backend motion event without duplicate starts.
  • Linked clients receive the same notifications they currently receive for manual motion events.
  • Motion events remain open through continuous motion and close only after quiet cooldown.
  • The detector does not continuously upload frames or full video for analysis.
  • Manual motion controls continue to work.
  • Detector state survives normal page usage and fails safely on disconnect or permission loss.

Implementation Sequence

  1. Add store state and camera dashboard controls.
  2. Add the local detector engine with score reporting only, no event triggering.
  3. Tune thresholds against manual test scenes.
  4. Wire score transitions to automatic event start/end.
  5. Add persistence for detector settings.
  6. Add tests, docs, and rollout notes.