## Archived (2025-04-11)
Due to a lack of time and other commitments, I've decided to archive this project. Personally, I'm now using Frigate with a cheap £60 Google Coral device - it's far better than this solution.
# SynoAI
A Synology Surveillance Station notification system utilising DeepStack AI, inspired by Christopher Adams' sssAI implementation.
The aim of the solution is to reduce the noise generated by Synology Surveillance Station's motion detection by routing all motion events through a DeepStack docker image to look for particular objects, e.g. people.
While sssAI is a great solution, it is hamstrung by having to use the Synology notification system to send motion alerts. Because of the delay between fetching the snapshot, processing the image with the AI and requesting the alert, the image attached to the Synology notification is sometimes captured 5-10 seconds after the motion alert was originally triggered.
SynoAI aims to solve this problem by side-stepping the Synology notifications entirely and allowing other notification systems to be used.
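To illustrate the idea of routing motion events through the AI, the sketch below filters DeepStack-style predictions (label, confidence, bounding box) down to types of interest above a minimum size. The function and field names are hypothetical, not SynoAI's actual code; DeepStack's detection responses are shaped roughly like the dictionaries shown.

```python
# Hypothetical sketch of the filtering idea (not SynoAI's actual code).
# DeepStack's detection endpoint returns predictions shaped roughly like:
# {"label": ..., "confidence": ..., "x_min": ..., "y_min": ..., "x_max": ..., "y_max": ...}

def interesting(predictions, types=frozenset({"person"}), min_w=50, min_h=50):
    """Keep predictions whose label is a type of interest and whose
    bounding box is at least min_w x min_h pixels."""
    return [
        p for p in predictions
        if p["label"] in types
        and (p["x_max"] - p["x_min"]) >= min_w
        and (p["y_max"] - p["y_min"]) >= min_h
    ]

preds = [
    {"label": "person", "confidence": 0.91, "x_min": 10, "y_min": 20, "x_max": 110, "y_max": 220},
    {"label": "cat", "confidence": 0.88, "x_min": 0, "y_min": 0, "x_max": 40, "y_max": 30},
]
print(len(interesting(preds)))  # prints 1: only the person is large enough and of interest
```

A motion event that yields no matching predictions would simply be dropped, which is how the noise reduction works.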
## Buy Me A Coffee! :coffee:
I made this application mostly for myself in order to improve upon Christopher Adams' original idea and don't expect anything in return. However, if you find it useful and would like to buy me a coffee, feel free to do so at Buy me a coffee! :coffee:. This is entirely optional, but would be appreciated! Or, even better, help support this project by contributing changes, such as expanding the supported notification systems (or even AIs).
## Versioning
The documentation you see here corresponds to the branch/tag selected above. It will be accurate for the version tag you have selected; if you are viewing the main branch, the documentation is assumed to correspond to the latest commit and latest image.
For example, if you are using the docker image/version v1.1.0, then ensure you have selected the tag for v1.1.0, otherwise you may see features or options which are not available on your version of SynoAI.
## Table of Contents
- Features
- Config
- Supported AIs
- Notifications
- Caveats
- Configuration
- Updating
- Docker
- Example appsettings.json
- Problems/Debugging
- FAQ
## Features
- Triggered via an Action Rule from Synology Surveillance Station
- Works using the camera name and requires no technical knowledge of the Surveillance Station API in order to retrieve the unique camera ID
- Uses an AI for object/person detection
- Produces an output image with highlighted objects using the original image at the point of motion detection
- Sends notification(s) at the point of detection with the processed image attached
- The AI does not need to run on the Synology box and can be run on another server
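For illustration, the Action Rule simply issues a webhook to SynoAI when motion is detected. The host, port and camera name below are placeholders, and the `/Camera/{name}` path is an assumption - verify the exact endpoint against the Configuration section for your version:

```shell
# Placeholder host/port/camera name; the /Camera/{name} endpoint path is an
# assumption -- verify against your SynoAI version's documentation.
SYNOAI_HOST="192.168.1.10:8080"
CAMERA_NAME="Driveway"
# Surveillance Station's Action Rule would call a URL of this shape on motion:
echo "http://${SYNOAI_HOST}/Camera/${CAMERA_NAME}"
```

Because the rule only needs the camera name, no Surveillance Station API knowledge is required to wire it up.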
## Config
An example appsettings.json configuration file can be found here, and all configuration for notifications and AI can be found under their respective sections. The following are the top-level configs for communication with Synology Surveillance Station:
### General Config
- Url [required]: The URL and port of your NAS, e.g. http://{IP}:{Port}
- User [required]: The user that will be used to request API snapshots
- Password [required]: The password of the user above
- AllowInsecureUrl [optional] (Default: false): Whether to allow an insecure HTTPS connection to the Synology API
- SynoAIUrl [optional]: The URL that SynoAI is reachable at, e.g. used to provide URLs to captures in some notifications
- Cameras [required]: An array of camera objects - see Camera Config
- Notifiers [required]: See notifications
- Quality [optional] (Default: Balanced): The quality, aka "profile type", to use when taking a snapshot. This is based upon the settings of the streams you have configured in Surveillance Station, i.e. if your low, balanced and high streams have the same settings in Surveillance Station, then this setting will make no difference. But if you have a high-quality 4K stream, a balanced 1080p stream and a low 720p stream, then setting this to High will return and process a 4K image. Note that the higher the quality of the snapshot, the longer the notification will take. Additionally, the larger the image, the smaller your detected objects may be, so ensure you set the MinSizeX/MinSizeY values accordingly.
  - High: Takes the snapshot using the profile type "High quality"
  - Balanced: Takes the snapshot using the profile type "Balanced"
  - Low: Takes the snapshot using the profile type "Low bandwidth"
- MinSizeX [optional] (Default: 50): The minimum size in pixels on the X axis that the object must be to trigger a change (ignored if specified on the Camera)
- MinSizeY [optional] (Default: 50): The minimum size in pixels on the Y axis that the object must be to trigger a change (ignored if specified on the Camera)
- Delay [optional] (Default: 5000): The period of time in milliseconds (ms) that must elapse between the end of the last motion detection for a camera and the next time it'll be processed
  - i.e. if your delay is set to 5000 and your camera reports motion 4 seconds after SynoAI finished processing the previous request, then the check will be ignored
  - However, if the report from Surveillance Station comes more than 5000ms later, then the camera's image will be processed
- DelayAfterSuccess [optional] (Default: null): The period of time in milliseconds (ms) that must elapse between the end of the last motion detection for a camera which resulted in a successful detection (and a notification being sent) and the next time it'll be processed. If this value isn't specified, then Delay will be used
- MaxSnapshots [optional] (Default: 1): Upon movement, the maximum number of snapshots sequentially retrieved from Surveillance Station until an object of interest is found, e.g. if 4 is specified, then SynoAI will make a maximum of 4 requests until it finds a type of interest. If a matching type is found on the 1st snapshot, then no further snapshots will be taken. If nothing is found within the 4 requests, then no further snapshots will be taken until the next time Surveillance Station detects motion
- DrawMode [optional] (Default: Matches): Whether to draw the predictions from the AI on the capture image:
  - Matches: Will draw boundary boxes over any object/person that matches the types defined on the cameras
  - All: Will draw boundary boxes over any object/person that the AI detected
  - Off: Will not draw boundary boxes (note - this will speed up the time between detection and notification, as SynoAI will not have to manipulate the image)
- DrawExclusions [optional] (Default: false): Whether to draw the exclusion zone boundary boxes on the image. Useful for setting up the initial exclusion zones
- BoxColor [optional] (Default: #FF0000): The colour of the border of the boundary box
- TextBoxColor [optional] (Default: #00FFFFFF, aka transparent): The colour of the box drawn behind the text to make the text more visible
- ExclusionBoxColour [optional] (Default: #00FF00): The colour of the border of the exclusion boundary box
- StrokeWidth [optional] (Default: 2): The width, in pixels, of the border around the boundary box
- Font [optional] (Default: Tahoma): The font to use when labelling the boundary boxes on the output image
- FontSize [optional] (Default: 12): The size of the font (in pixels) to use when labelling the boundary boxes on the output image
- FontColor [optional] (Default: #00FF00, aka green): The colour of the label text when labelling the boundary boxes on the output image
- TextOffsetX [optional] (Default: 2): The number of pixels to offset the label from the left of the inside of the boundary box on the output image
- TextOffsetY [optional] (Default: 2): The number of pixels to offset the label from the top of the inside of the boundary box on the output image
- SaveOriginalSnapshot [optional] (Default: Off): A mode determining whether to save the source snapshot that was captured from the API before it was sent to and processed by the AI:
  - Off: Will never save the original snapshot
  - Always: Will save every single snapshot every time motion is detected
  - WithPredictions: Will save the snapshot if the AI makes one or more predictions (note that this will include predictions which aren't valid)
  - WithValidPredictions: Will save the snapshot if the AI makes one or more valid predictions
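Putting a few of the options above together, a minimal top-level configuration might look like the following sketch. The values are illustrative placeholders, not recommendations; see the example appsettings.json for a complete, accurate file:

```json
{
  "Url": "http://192.168.1.2:5000",
  "User": "SynoAIUser",
  "Password": "********",
  "Quality": "Balanced",
  "Delay": 5000,
  "DrawMode": "Matches",
  "Cameras": [],
  "Notifiers": []
}
```

Cameras and Notifiers are left empty here purely for brevity; both are required and are documented in their own sections.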