v2.0.0 - Viseron rewrite
This release features a massive rewrite that I have been working on for the past year.
It focuses on decoupling all parts of Viseron, making it more modular. This allows for easier integration of new functionality.
The config.yaml format changes completely, so each user will have to do some work to port their configuration over to the new version.
The general config format is a `component` which implements one or more `domains`.
Each camera has a unique `camera identifier` which flows through the entire configuration.
The `nvr` component then ties all these different domains together and provides the full functionality.
The big benefit of this new format is that you can mix and match components more freely.
For instance, you could use different object detectors for different cameras; you are not tied to just one.
Config example
```yaml
ffmpeg:  # <-- component
  camera:  # <-- domain
    camera_one:  # <-- camera identifier
      name: Camera 1
      host: 192.168.10.10
      username: test
      password: test
    camera_two:  # <-- camera identifier
      name: Camera 2
      host: 192.168.10.11
      username: test
      password: test
    ....

darknet:  # <-- component
  object_detector:  # <-- domain
    model: /my_custom_model/model.weights
    cameras:
      camera_one:  # <-- camera identifier
        fps: 5

deepstack:  # <-- component
  host: deepstack  # <-- component config option
  port: 5000  # <-- component config option
  object_detector:  # <-- domain
    cameras:
      camera_two:  # <-- camera identifier
        fps: 1
        labels:
          - label: person
            confidence: 0.75
            trigger_recorder: false
  face_recognition:  # <-- domain
    cameras:
      camera_one:
      camera_two:
    labels:
      - person

background_subtractor:  # <-- component
  motion_detector:  # <-- domain
    cameras:
      camera_one:  # <-- camera identifier
        fps: 1
        mask:
          - coordinates:
              - x: 400
                y: 200
              - x: 1000
                y: 200
              - x: 1000
                y: 750
              - x: 400
                y: 750

nvr:  # <-- component
  camera_one:  # <-- camera identifier
  camera_two:  # <-- camera identifier
```
Config example with publicly available cameras (used in the screenshots)
```yaml
ffmpeg:
  camera:
    viseron_camera:
      name: Camera 1
      host: 195.196.36.242
      path: /mjpg/video.mjpg
      port: 80
      stream_format: mjpeg
      fps: 6
      recorder:
        idle_timeout: 1
        codec: h264
    viseron_camera2:
      name: Camera 2
      host: storatorg.halmstad.se
      path: /mjpg/video.mjpg
      stream_format: mjpeg
      port: 443
      fps: 2
      protocol: https
      recorder:
        idle_timeout: 1
        codec: h264
    viseron_camera3:
      name: Camera 3
      host: 195.196.36.242
      path: /mjpg/video.mjpg
      port: 80
      stream_format: mjpeg
      fps: 6
      recorder:
        idle_timeout: 1
        codec: h264

mog2:
  motion_detector:
    cameras:
      viseron_camera:
        fps: 1
      viseron_camera2:
        fps: 1

background_subtractor:
  motion_detector:
    cameras:
      viseron_camera3:
        fps: 1
        mask:
          - coordinates:
              - x: 400
                y: 200
              - x: 1000
                y: 200
              - x: 1000
                y: 750
              - x: 400
                y: 750

darknet:
  object_detector:
    cameras:
      viseron_camera:
        fps: 1
        scan_on_motion_only: false
        labels:
          - label: person
            confidence: 0.8
            trigger_recorder: true
      viseron_camera2:
        fps: 1
        labels:
          - label: person
            confidence: 0.8
            trigger_recorder: true
      viseron_camera3:
        fps: 1
        labels:
          - label: person
            confidence: 0.8
            trigger_recorder: true

nvr:
  viseron_camera:
  viseron_camera2:
  viseron_camera3:

webserver:

mqtt:
  broker: mqtt_broker.lan
  port: 1883
  username: !secret mqtt_username
  password: !secret mqtt_password
  home_assistant:

logger:
  default_level: debug
```
Frontend
There is now a UI included in Viseron, written in React with TypeScript.
It is enabled by default and can be reached on port 8888 inside the container.
Currently it lets you view cameras and recordings and edit the configuration.
It will be expanded upon in the future.
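If you map the UI port when starting the container, you can reach it from your browser. A minimal docker-compose sketch, assuming the `roflcoopter/viseron` image; the volume paths are placeholders that you should adjust to your own setup:

```yaml
version: "3"
services:
  viseron:
    image: roflcoopter/viseron:latest
    volumes:
      - /my/viseron/config:/config          # placeholder path, config.yaml lives here
      - /my/viseron/recordings:/recordings  # placeholder path for recordings
    ports:
      - "8888:8888"  # expose the built-in web UI
```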
Jetson Nano support
This release also features better Jetson Nano support through the `gstreamer` component.
This means that you can utilize your Nano for hardware accelerated camera decoding and object detection.
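Since `gstreamer` implements the `camera` domain, it is configured much like `ffmpeg` above. A hedged sketch with placeholder values; check the `gstreamer` component documentation for the full set of options:

```yaml
gstreamer:  # <-- component
  camera:  # <-- domain
    camera_one:  # <-- camera identifier
      name: Camera 1
      host: 192.168.10.10  # placeholder
      port: 554            # placeholder
      path: /stream        # placeholder
```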
CompreFace face recognition
Face recognition using CompreFace is now available through the new `compreface` component.
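A rough configuration sketch; the host, port and API key values are placeholders, and the exact placement of these options may differ, so check the `compreface` component documentation:

```yaml
compreface:  # <-- component
  face_recognition:  # <-- domain
    host: compreface.lan             # placeholder, your CompreFace server
    port: 8000                       # placeholder
    api_key: !secret compreface_key  # placeholder
    cameras:
      camera_one:
```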
Image classification
A new post processor is available: `image_classification`.
This allows you to get more granular results from a detected object.
As an example, you could detect a specific dog breed or a specific kind of bird.
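A rough sketch of what this could look like, assuming the `edgetpu` component provides the `image_classification` domain and that `labels` filters which classifications are reported; verify the exact options in the post processor documentation:

```yaml
edgetpu:  # <-- component
  image_classification:  # <-- domain (post processor)
    cameras:
      camera_one:
    labels:
      - dog
      - bird
```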
Multiprocessing
The `darknet` and `edgetpu` object detectors utilize multiprocessing, which spreads the load between multiple processes.
This allows for better performance, but also makes it easier to analyze and fine-tune the load on your system.
Documentation site
The documentation has moved from the README to a dedicated site.
Hopefully it will help you use Viseron better.
I need your help on how it can be improved; writing good docs is always hard.
I am especially proud of the components page, which is mostly generated from the config schemas themselves.
This means that the documentation and schema will always match, yay!
The documentation is hosted here: https://viseron.netlify.app/
Breaking changes
- ALL kinds of inheritance in the config has been removed. This means that you have to explicitly configure your object detector and motion detector settings for each camera.
- `interval` has been removed from `object_detection` and `motion_detection`. A new config option `fps` is used instead. This change was made since `interval` was specified in seconds, which was quite confusing both in the code and for users.
- `logging` has been removed in all shapes and forms and has been replaced with the new component `logger`. Please see the updated documentation (a hedged sketch also follows after this list).
- The `cameras` config section has been removed. Camera config is now specified under a component.
- Each object detector has been split up into individual components. See the documentation for each detector.
- Each motion detector has been split up into individual components. See the documentation for each detector.
- `recorder` can no longer be configured on a global level. It now has to be present under each camera configuration.
- `timeout` under `recorder` is now called `idle_timeout`.
- `static_mjpeg_streams` are now called `mjpeg_streams`.
- `enable` under `object_detection` is no more. To disable object detection you simply don't configure it for a camera. The same goes for `motion_detector`.
- `timeout` for `motion_detector` is now called `recorder_keepalive`.
- `max_timeout` for `motion_detector` is now called `max_recorder_keepalive`.
- Recordings are now stored in the folder structure `/recordings/<camera name>/<date>/<timestamp>.mp4`.
- Images are no longer streamed to MQTT/Home Assistant, with the exception of thumbnails. This is because MQTT is not really suited for this and it impacts performance a lot. If you still want the cameras in Home Assistant you can point Home Assistant to an MJPEG stream in Viseron.
- `filter_args` has been removed for `camera` and `recorder`. For `camera`, use `video_filters` instead. For `recorder`, you can use both `video_filters` and `audio_filters`.
Short config example to rotate video 180 degrees:

```yaml
ffmpeg:
  camera:
    camera_1:
      ....
      video_filters:  # These filters rotate the images processed by Viseron
        - transpose=2
        - transpose=2
      recorder:
        video_filters:  # These filters rotate the recorded video
          - transpose=2
          - transpose=2
```
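For reference, a minimal sketch of the new `logger` component. `default_level` appears in the example configuration above; the per-logger `logs` mapping is an assumption on my part, so verify the option name in the logger documentation:

```yaml
logger:
  default_level: info
  # Per-logger override; the "logs" option name is an assumption, check the docs
  logs:
    viseron.components.ffmpeg: debug
```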
What's Changed
- remove interval and replace with fps by @roflcoopter in #287
- Update 50-check-if-rpi to support Rpi 4 8gb by @DevelopIdeas in #324
- Add newline after jpgboundary by @olekenneth in #355
- Add new line to jpg boundary refactoring by @olekenneth in #356
- Viseron v2 by @roflcoopter in #306
- Feature/remove dead code by @roflcoopter in #367
- fix automatic pipeline trigger error after rename by @roflcoopter in #368
- fix jetson nano crossbuilds after balenalib breaking change by @roflcoopter in #369
- make README more slim by @roflcoopter in #370
- fix broken link to contributing guidelines by @roflcoopter in #371
- update tagline by @roflcoopter in #372
- add back accidentally removed code by @roflcoopter in #373
- Feature/restart from UI by @roflcoopter in #375
- docs: add redirects to fix 404 by @roflcoopter in #377
- index pages as well for easier search by @roflcoopter in #378
- scale the non-padded axis by @roflcoopter in #379
- make sure user abc is always in plugdev group by @roflcoopter in #380
- update dlib to 19.24 and set compute capabilities by @roflcoopter in #381
- Fix error that was thrown if a domain was setup with multiple missing deps by @roflcoopter in #382
- add compreface component by @roflcoopter in #384
- add websocket ping for more stable connections by @roflcoopter in #385
- Feature/warn on missing nvr by @roflcoopter in #386
- add yolov7 models by @roflcoopter in #387
- Feature/frontend sidebar by @roflcoopter in #388
- use multistage builds for models by @roflcoopter in #395
- Feature/entities page by @roflcoopter in #397
- Feature/yolov7 fix by @roflcoopter in #398
- fix exception when no cameras are configured by @roflcoopter in #399
- perform preprocessing for quantized classifier models by @roflcoopter in #400
- add list of objects to object detector binary sensors by @roflcoopter in #404
- install libpng-dev in jetson nano dlib and base by @roflcoopter in #405
- return zero for negative coordinates by @roflcoopter in #407
- install more VAAPI drivers by @roflcoopter in #410
- use similarity threshold to find unknown faces by @roflcoopter in #412
- improve accuracy of image classifier by @roflcoopter in #413
- include qsv decoders/encoders by @roflcoopter in #414
- make chown silent by @roflcoopter in #415
- add attributes to recorder entities by @roflcoopter in #417
- make sure segments are not deleted during concat by @roflcoopter in #418
- improve CI by @roflcoopter in #419
- fix cv2 dnn import by @roflcoopter in #420
- remove deprecated set-output by @roflcoopter in #421
- add output_element config option to gstreamer by @roflcoopter in #424
- switch to mkv for certain audio codecs by @roflcoopter in #428
- Viseron v2 by @roflcoopter in #366
- docs: add example configuration by @roflcoopter in #429
New Contributors
- @DevelopIdeas made their first contribution in #324
Full Changelog: v1.10.2...v2.0.0