Foreground Web Motion Detection Implementation Plan
Summary
Implement automatic motion detection for the camera role in the web app while the browser remains open in the foreground on a plugged-in phone. Reuse the existing backend motion event, notification, recording, and stream flows instead of introducing a new backend motion pipeline.
Goals
- Detect motion automatically from the local camera preview in the web app.
- Trigger the existing motion event lifecycle with minimal backend change.
- Minimize battery and heat by using low-resolution frame analysis and adaptive sampling.
- Keep the first release heuristic and deterministic rather than ML-based.
- Make behavior observable and tunable from the camera dashboard.
Constraints
- The browser tab is assumed to remain visible in the foreground.
- The initial target is the web app camera dashboard, not the Expo mobile app.
- Existing backend endpoints remain the source of truth for motion events.
- The current camera role UX in WebApp/src/routes/camera/+page.svelte must continue to work.
- The current motion event routes in Backend/routes/events.ts should be reused.
Assumptions
- Manual motion controls remain available as an operator override.
- triggeredBy can distinguish manual versus automatic events without a schema change.
- Persisted per-device motion settings are desirable, but an in-memory first pass is acceptable.
- The existing recording behavior tied to stream/motion events remains the intended user experience.
Non-Goals
- True background detection while the browser tab is hidden.
- ML person/pet/vehicle classification in the first release.
- Replacing the current notification or recording architecture.
- Providing appliance-grade uptime guarantees.
Existing Integration Points
- Camera preview and camera dashboard actions are already hosted in WebApp/src/routes/camera/+page.svelte.
- Local camera capture and preview lifecycle already exist in WebApp/src/lib/app/controller.js.
- Manual motion start/end already call the backend in WebApp/src/lib/app/controller.js.
- Camera-side state is stored in WebApp/src/lib/app/store.js.
Strategy
Phase 1: Add Motion Detector State And Controls
Add camera-side detector state to the Svelte store:
- motionDetectionEnabled
- motionDetectorStatus, with values such as idle, warming_up, monitoring, triggered, cooldown
- motionSensitivity
- motionSampleIntervalMs
- motionTriggerConsecutiveFrames
- motionQuietCooldownMs
- motionMinimumEventMs
- motionScore
- motionDebugEnabled
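The store slice above could be initialized from a defaults object. This is a minimal sketch; the field names come from the plan, but every default value here is an illustrative assumption to be tuned later:

```javascript
// Sketch of the detector slice added to the camera-side store.
// Field names follow the plan; all default values are assumptions.
const defaultMotionState = {
  motionDetectionEnabled: false,
  motionDetectorStatus: 'idle', // idle | warming_up | monitoring | triggered | cooldown
  motionSensitivity: 0.5,             // 0..1, higher = more sensitive
  motionSampleIntervalMs: 1000,       // ~1 fps baseline sampling
  motionTriggerConsecutiveFrames: 3,  // high-motion frames required to trigger
  motionQuietCooldownMs: 8000,        // quiet time required before ending an event
  motionMinimumEventMs: 5000,         // minimum open duration for an event
  motionScore: 0,                     // latest normalized score, 0..1
  motionDebugEnabled: false,
};
```

In a Svelte store this object would seed a writable store so arm/disarm and the sliders mutate one well-known shape.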
Expose controls on the camera dashboard:
- Arm/disarm automatic detection
- Sensitivity slider or preset selector
- Low-power mode preset
- Live debug score and current detector state
Keep the existing manual Simulate Motion Event and Stop Recording actions for fallback.
Phase 2: Build A Lightweight Detector Engine
Create a dedicated detector module, for example WebApp/src/lib/app/motion-detector.js, owned by the camera dashboard flow.
Detector design:
- Read from the existing localCameraStream
- Draw frames into an offscreen or hidden canvas
- Downsample aggressively to about 160x90 or 192x108
- Convert to grayscale
- Compare the current frame against the previous smoothed frame
- Compare the current frame against the previous smoothed frame
- Compute a normalized motion score such as changed-pixel ratio or block-delta score
- Ignore tiny isolated noise with thresholding and optional block aggregation
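The scoring core of that design can be sketched as two pure functions, assuming frames have already been downsampled and converted to grayscale byte arrays; the smoothing factor and per-pixel noise threshold are illustrative values, not tuned constants:

```javascript
// Exponentially smooth the reference frame so slow lighting drift is
// absorbed into the baseline instead of counting as motion.
function smoothReference(reference, current, alpha = 0.1) {
  const out = new Uint8ClampedArray(reference.length);
  for (let i = 0; i < reference.length; i++) {
    out[i] = (1 - alpha) * reference[i] + alpha * current[i];
  }
  return out;
}

// Normalized motion score: the fraction of pixels whose grayscale delta
// against the smoothed reference exceeds a noise threshold (changed-pixel
// ratio). Returns a value in 0..1.
function motionScore(reference, current, pixelDeltaThreshold = 25) {
  let changed = 0;
  for (let i = 0; i < current.length; i++) {
    if (Math.abs(current[i] - reference[i]) > pixelDeltaThreshold) changed++;
  }
  return changed / current.length;
}
```

At 160x90 this is 14,400 comparisons per sample, cheap enough to run once per second without measurable heat.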
Battery and heat controls:
- Default sampling at 1 fps
- Burst to 4-6 fps only after suspicious motion begins
- Return to low sampling after cooldown
- Skip work if the preview is not ready, the detector is disarmed, or the document becomes hidden
- Avoid full-resolution processing and avoid network uploads during detection itself
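The adaptive-sampling rules above reduce to a small pure function that picks the next frame-grab delay from detector state. This is a sketch under assumed state-field names and interval values:

```javascript
// Choose the delay before the next frame grab, or null to skip sampling
// entirely. Field names and interval values are illustrative assumptions.
function nextSampleDelayMs(state) {
  // Skip all work when the preview is unusable or the detector is disarmed.
  if (!state.armed || !state.previewReady || state.documentHidden) {
    return null;
  }
  switch (state.detectorStatus) {
    case 'candidate_motion':
    case 'triggered':
      return 200;  // burst to ~5 fps while motion is suspected or active
    case 'cooldown':
      return 500;  // taper back down after an event ends
    default:
      return 1000; // ~1 fps baseline while idle/monitoring
  }
}
```

The caller would schedule each grab with setTimeout using the returned delay, rather than a fixed setInterval, so the rate follows the state machine automatically.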
Phase 3: Add Event State Machine And Backend Reuse
Implement a camera-side state machine:
- idle -> monitoring
- monitoring -> candidate_motion
- candidate_motion -> triggered
- triggered -> cooldown
- cooldown -> monitoring
Trigger rules:
- Require N consecutive high-motion frames before starting an event
- Call the existing backend motion start endpoint once
- Set triggeredBy to auto_motion
- Hold the event open for at least motionMinimumEventMs
- Only end after the score stays below threshold for motionQuietCooldownMs
This keeps backend changes minimal because the existing event lifecycle already fans out realtime alerts and push notifications.
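The transitions and trigger rules can be sketched as one pure step function. It also bakes in the hysteresis from the risk controls: a higher startThreshold opens a candidate, and the score must stay under a lower endThreshold through the quiet cooldown to close. All names and defaults here are assumptions, not the final API:

```javascript
// One detector step: takes the current state, the latest motion score, and
// a timestamp; returns the next state. The caller starts the backend event
// on entry to 'triggered' and ends it on entry to 'cooldown'.
function stepDetector(s, score, now) {
  const next = { ...s, score };
  switch (s.status) {
    case 'monitoring':
      if (score >= s.startThreshold) {
        next.status = 'candidate_motion';
        next.highFrames = 1;
      }
      break;
    case 'candidate_motion':
      if (score >= s.startThreshold) {
        next.highFrames = s.highFrames + 1;
        if (next.highFrames >= s.triggerConsecutiveFrames) {
          next.status = 'triggered'; // caller starts the backend motion event
          next.eventStartedAt = now;
          next.lastHighAt = now;
        }
      } else {
        next.status = 'monitoring'; // suppressed candidate
        next.highFrames = 0;
      }
      break;
    case 'triggered': {
      if (score >= s.endThreshold) next.lastHighAt = now;
      const openLongEnough = now - s.eventStartedAt >= s.minimumEventMs;
      const quietLongEnough = now - next.lastHighAt >= s.quietCooldownMs;
      if (openLongEnough && quietLongEnough) {
        next.status = 'cooldown'; // caller ends the backend motion event
      }
      break;
    }
    case 'cooldown':
      next.status = 'monitoring';
      break;
  }
  return next;
}
```

Keeping the step function pure makes threshold tuning and the Phase 7 test scenarios straightforward: feed it synthetic score sequences and assert on the transitions.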
Phase 4: Make Recording Behavior Predictable
The detector should not record constantly.
Recommended behavior:
- Motion detection itself only analyzes low-res frames locally
- When automatic motion is confirmed, call the existing start-motion flow
- Continue using the current recording logic already associated with motion and streaming
- End the motion event only after quiet cooldown, not on every instantaneous dip
This avoids repeated start/stop loops that waste device resources.
Phase 5: Add Persistence For Operator Settings
Initial implementation can use local storage on the web app for speed.
Second step:
- Add motion settings persistence per camera device
- Store settings either in devices.metadata if introduced later, or in a new device_motion_settings table
- Load persisted settings on device registration or camera dashboard open
Suggested settings:
- Enabled/armed
- Sensitivity
- Sample interval
- Quiet cooldown
- Minimum event duration
- Optional region of interest
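The localStorage first pass could be built from pure serialize/parse helpers so the storage backend can later become the per-device table without touching callers. The key format and every default below are assumptions:

```javascript
// Defaults for the operator settings listed above; values are illustrative.
const MOTION_SETTINGS_DEFAULTS = {
  enabled: false,
  sensitivity: 0.5,
  sampleIntervalMs: 1000,
  quietCooldownMs: 8000,
  minimumEventMs: 5000,
  regionOfInterest: null, // optional {x, y, width, height} in frame coordinates
};

// Hypothetical per-device localStorage key.
function settingsKey(deviceId) {
  return `motionSettings:${deviceId}`;
}

// Merge stored JSON over defaults so newly added fields get sane values
// even when an older payload is loaded; fall back to defaults on bad data.
function parseMotionSettings(raw) {
  if (!raw) return { ...MOTION_SETTINGS_DEFAULTS };
  try {
    return { ...MOTION_SETTINGS_DEFAULTS, ...JSON.parse(raw) };
  } catch {
    return { ...MOTION_SETTINGS_DEFAULTS };
  }
}
```

Loading then becomes parseMotionSettings(localStorage.getItem(settingsKey(deviceId))), and the backend-backed version only replaces the storage call.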
Phase 6: Observability And Debugging
Add operator-visible debug surfaces:
- Current motion score
- Detector state
- Last trigger time
- Count of suppressed candidate triggers
Add activity log entries for:
- Detector armed/disarmed
- Detector warmed up
- Automatic motion started
- Automatic motion ended
- Detector paused because preview or socket is unavailable
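A small builder can keep those activity-log entries structured and consistent. The entry shape and event names here are assumptions; the real app would append these through whatever activity-log mechanism it already has:

```javascript
// Build a structured activity-log entry for a detector lifecycle event.
// Event names mirror the list above; unknown names fail fast.
function motionLogEntry(event, detail = {}) {
  const known = [
    'detector_armed', 'detector_disarmed', 'detector_warmed_up',
    'auto_motion_started', 'auto_motion_ended', 'detector_paused',
  ];
  if (!known.includes(event)) throw new Error(`unknown detector event: ${event}`);
  return {
    at: new Date().toISOString(),
    source: 'motion-detector',
    event,
    ...detail, // e.g. { reason: 'preview_unavailable' } for detector_paused
  };
}
```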
Phase 7: Test And Tune
Testing should cover:
- Low-motion idle scenes
- Moderate lighting flicker
- Real person entry into frame
- Camera shake false positives
- Reconnection behavior
- Event deduplication
Tuning targets:
- Low false positive rate in static indoor scenes
- Trigger latency below about 2 seconds
- CPU usage low enough to avoid obvious thermal throttling during foreground operation
File-Level Change Plan
Primary files:
- WebApp/src/routes/camera/+page.svelte
- WebApp/src/lib/app/controller.js
- WebApp/src/lib/app/store.js
Likely new files:
- WebApp/src/lib/app/motion-detector.js
- WebApp/src/lib/app/motion-detector.test.js or equivalent
Optional later backend files:
- Per-device motion settings persistence (migration and route), if Phase 5 moves settings server-side
Risk Controls
- Use hysteresis so one threshold starts motion and a lower threshold ends it.
- Require consecutive-frame confirmation before starting events.
- Pause detection when preview, permission, or socket connectivity is unavailable.
- Keep all frame processing local and low resolution.
- Keep manual controls available during rollout.
- Ship with debug mode so threshold tuning is possible without code changes.
Recommended Operator Settings
- Start with the Balanced profile on a plugged-in phone.
- Use Low Power if the phone runs warm or the scene is mostly static.
- Keep the browser tab visible and the camera dashboard open while detection is armed.
- Leave debug mode off during normal operation and enable it only while tuning thresholds.
- Prefer a stable camera mount and a consistent indoor lighting setup to reduce false positives.
Acceptance Criteria
- A camera-role web device can arm automatic motion detection from the camera dashboard.
- When visible motion enters frame, the web app starts one backend motion event without duplicate starts.
- Linked clients receive the same notifications they currently receive for manual motion events.
- Motion events remain open through continuous motion and close only after quiet cooldown.
- The detector does not continuously upload frames or full video for analysis.
- Manual motion controls continue to work.
- Detector state survives normal page usage and fails safely on disconnect or permission loss.
Recommended Delivery Order
- Add store state and camera dashboard controls.
- Add local detector engine with score reporting only, no event triggering.
- Tune thresholds against manual test scenes.
- Wire score transitions to automatic event start/end.
- Add persistence for detector settings.
- Add tests, docs, and rollout notes.