Releases: roflcoopter/viseron
v2.0.0b3 - Compreface
What's Changed
- add compreface component by @roflcoopter in #384
- add websocket ping for more stable connections by @roflcoopter in #385
- Feature/warn on missing nvr by @roflcoopter in #386
Full Changelog: v2.0.0b2...v2.0.0b3
v2.0.0b2 - Bugfixes
Fixes an issue where bounding boxes were not displaying correctly for darknet
Fixes permission issues for edgetpu
Upgrades dlib to 19.24 to support more CUDA compute capabilities
What's Changed
- docs: add redirects to fix 404 by @roflcoopter in #377
- index pages as well for easier search by @roflcoopter in #378
- scale the non-padded axis by @roflcoopter in #379
- make sure user abc is always in plugdev group by @roflcoopter in #380
- update dlib to 19.24 and set compute capabilities by @roflcoopter in #381
- Fix error that was thrown if a domain was setup with multiple missing deps by @roflcoopter in #382
Full Changelog: v2.0.0b1...v2.0.0b2
2.0.0b1 - Viseron rewrite
This release features a massive rewrite that I have been working on for the past year.
It focuses on decoupling all parts of Viseron, making it more modular, which allows for easier integration of new functionality.
The config.yaml file changes completely, so each user will need to do some work to port over to the new version.
The general config format is a `component` which implements one or more `domains`.
Each camera has a unique `camera identifier` which flows through the entire configuration.
The `nvr` component then ties all these different domains together and provides the full functionality.
The big benefit of this new format is that you can mix and match components more freely.
For instance, you could use different object detectors for different cameras; you are not tied to just one.
Config Example
```yaml
ffmpeg:                  # <-- component
  camera:                # <-- domain
    camera_one:          # <-- camera identifier
      name: Camera 1
      host: 192.168.10.10
      username: test
      password: test
    camera_two:          # <-- camera identifier
      name: Camera 2
      host: 192.168.10.11
      username: test
      password: test
    ....

darknet:                 # <-- component
  object_detector:       # <-- domain
    model: /my_custom_model/model.weights
    cameras:
      camera_one:        # <-- camera identifier
        fps: 5

deepstack:               # <-- component
  host: deepstack        # <-- component config option
  port: 5000             # <-- component config option
  object_detector:       # <-- domain
    cameras:
      camera_two:        # <-- camera identifier
        fps: 1
        labels:
          - label: person
            confidence: 0.75
            trigger_recorder: false
  face_recognition:      # <-- domain
    cameras:
      camera_one:
      camera_two:
    labels:
      - person

background_subtractor:   # <-- component
  motion_detector:       # <-- domain
    cameras:
      camera_one:        # <-- camera identifier
        fps: 1
        mask:
          - coordinates:
              - x: 400
                y: 200
              - x: 1000
                y: 200
              - x: 1000
                y: 750
              - x: 400
                y: 750

nvr:                     # <-- component
  camera_one:            # <-- camera identifier
  camera_two:            # <-- camera identifier
```
Frontend
There is now a UI included in Viseron, written in React/TypeScript.
It is enabled by default and can be reached on port 8888 inside the container.
Currently it features basic functionality like viewing cameras, recordings and editing the configuration.
It will be expanded upon in the future.
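If you want to reach the UI from outside the container, a docker-compose port mapping along these lines should do it (the service name and host port below are just examples, not part of this release):

```yaml
# Sketch only: publish the built-in UI.
# 8888 on the right is the container port mentioned above;
# the host port on the left is an arbitrary choice.
services:
  viseron:
    image: roflcoopter/viseron:latest
    ports:
      - "8888:8888"
```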
Jetson Nano support
This release also features better Jetson Nano support through the `gstreamer` component.
This means that you can utilize your Nano for hardware accelerated camera decoding and object detection.
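As a rough sketch, a camera defined through the `gstreamer` component could look something like this (the options are assumed to mirror the `ffmpeg` camera example above; check the documentation site for the real `gstreamer` schema):

```yaml
# Assumed to mirror the ffmpeg camera domain shown earlier -- verify
# the exact options against the gstreamer component documentation.
gstreamer:          # <-- component
  camera:           # <-- domain
    camera_one:     # <-- camera identifier
      name: Camera 1
      host: 192.168.10.10
      username: test
      password: test
```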
Documentation site
The documentation has moved over from being in a README format to a dedicated site.
Hopefully it will help you use Viseron better.
I need your help on how it can be improved; writing good docs is always hard.
I am especially proud of the components page, which is mostly generated from the config schema used.
This means that the documentation and schema will always match, yay!
The documentation is hosted here: https://viseron.netlify.app/
What's Changed
- remove interval and replace with fps by @roflcoopter in #287
- Add newline after jpgboundary by @olekenneth in #355
- Add new line to jpg boundary refactoring by @olekenneth in #356
- Viseron v2 by @roflcoopter in #306
- Feature/remove dead code by @roflcoopter in #367
- fix automatic pipeline trigger error after rename by @roflcoopter in #368
- fix jetson nano crossbuilds after balenalib breaking change by @roflcoopter in #369
- make README more slim by @roflcoopter in #370
- fix broken link to contributing guidelines by @roflcoopter in #371
Full Changelog: v1.10.2...v2.0.0b1
1.10.2 - Substream bugfix
Fixes a deadlock that could appear when using a substream
What's Changed
- resolve deadlock when using substream by @roflcoopter in #302
Full Changelog: v1.10.1...v1.10.2
1.10.1
Bugfix release, see autogenerated changelog below
What's Changed
- properly override config for motion detection by @roflcoopter in #285
- v1.10.1 by @roflcoopter in #286
Full Changelog: v1.10.0...v1.10.1
1.10.0 - UI!
The work on a UI has started thanks to @danielperna84
As of now you can view your MJPEG streams, watch recordings and also change the config.yaml file from the interface.
There are also new Contribution guidelines which hopefully make it easier for folks to dive into the project.
Breaking changes
- The decoders `h264_nvv4l2dec` and `hevc_nvv4l2dec` are no longer available for the Jetson Nano. Use `h264_nvmpi` or `hevc_nvmpi` instead (see the sketch after this list).
- Frames that are older than two seconds will now be dropped from the detection queue. If you have a slow system you might have to raise this number using the `max_frame_age` config option.
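A rough sketch of the decoder change (the camera layout and the `codec` option name are assumptions about the 1.x config format; verify against the README for this version):

```yaml
# Assumed 1.x-style camera entry -- only the decoder names come from this changelog.
cameras:
  - name: front_door
    host: 192.168.10.10
    # codec: h264_nvv4l2dec   # no longer available on the Jetson Nano
    codec: h264_nvmpi         # use the nvmpi decoder instead
```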
New features
- We now have a basic UI! Thanks to @danielperna84 the work on a UI has begun. Functionality is right now pretty limited, but will be improved upon. By releasing this we hope to get some more contributors on board.
- New config option `device` for the `edgetpu` detector. Lets you specify which EdgeTPU device to use if you have multiple.
- New config option `mask` under a camera's `object_detection` block. Use this to completely ignore objects that are within the mask. Closes #199 (see the config sketch after this list)
- There are now multiple kinds of motion detectors. Check out Background subtractor MOG2 which might be more reliable for you.
  - Background subtractor (the default, same motion detection as always)
  - Background subtractor MOG2
- The status sensor for each camera now has a new state, `disconnected`, for when cameras can't be reached. Use this information to know if Viseron is connected to all your cameras and possibly send an alarm on disconnects.
- New config option `max_frame_age`. Frames that are older than this number will be dropped from the detection queue. If you have a slow system you might need to raise this number.
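A rough sketch combining some of the options above. The option names come from this changelog, but their exact placement and the mask syntax are assumptions; consult the README for this version:

```yaml
# Placement and mask syntax are assumptions, not confirmed by the changelog.
object_detection:
  type: edgetpu
  device: usb            # pick a specific EdgeTPU if you have several
  max_frame_age: 2       # seconds; raise this on slow systems

cameras:
  - name: front_door
    host: 192.168.10.10
    object_detection:
      mask:              # objects inside the mask are ignored completely
        - points:
            - x: 400
              y: 200
            - x: 1000
              y: 200
            - x: 1000
              y: 750
            - x: 400
              y: 750
```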
Changes
- Jetson Nano image now defaults to the `h264_nvmpi` and `hevc_nvmpi` decoders.
- Upgrade to CUDA 11.4.1 for `amd64-cuda-*` images
Fixes
- Fixed a bug that could cause object detection to return duplications across all cameras when running YOLO on CUDA
Docker images will be on Docker Hub shortly
Automatically generated changelog
- Pytest by @roflcoopter in #222
- list all changed files in ci by @roflcoopter in #240
- Edgetpu pci permissions by @roflcoopter in #242
- Object mask by @roflcoopter in #243
- build nvmpi and pin ffmpeg=4.2 for jetson nano by @roflcoopter in #244
- pin numpy and install scikit-learn from pip by @roflcoopter in #245
- Revert "pin numpy and install scikit-learn from pip" by @roflcoopter in #247
- set max size in detector queue by @roflcoopter in #248
- Update README.md by @magicmonkey in #255
- fix libedgetpu1 installation by @roflcoopter in #265
- Simple UI by @danielperna84 in #266
- Motion detection revamp by @roflcoopter in #269
- upgrade to cuda 11.4.1 by @roflcoopter in #268
- set camera status on disconnect by @roflcoopter in #273
- Detector duplication fix by @roflcoopter in #274
- OpenCV 4.5.3 by @roflcoopter in #275
- implement basic API boilerplate by @roflcoopter in #272
- fix typo in opencv build command by @roflcoopter in #276
- try to speed up OpenCV build time by @roflcoopter in #277
- add dummy setup.py file to make tox happy by @roflcoopter in #282
- Dev by @roflcoopter in #283
New Contributors
- @magicmonkey made their first contribution in #255
- @danielperna84 made their first contribution in #266
Full Changelog: v1.9.0...v1.10.0
1.9.0 - DeepStack, Jetson Nano and stability improvements
This release brings support for DeepStack object detection as well as face recognition.
Another exciting feature is the support for the Jetson Nano!
Other than that, I have spent a lot of effort to increase the stability of Viseron.
Viseron now handles misbehaving cameras in a much better way.
If a camera is unavailable during startup, Viseron will keep trying to reconnect. This is to help against temporarily unavailable cameras.
Viseron carefully monitors any subprocesses or threads that are started, and in the case of one crashing, it will be restarted accordingly.
There are a few more goodies in this one, as well as some breaking changes. Make sure to read the changelog below.
Enjoy!
Breaking changes
- `triggers_recording` under `labels` has been deprecated. Use `trigger_recorder` instead. A warning will be printed if you still use `triggers_recording`, reminding you to change your config (see the sketch after this list).
- The endpoint `/<camera>/stream` has been renamed to `/<camera>/mjpeg-stream`. If you still use the old endpoint Viseron will warn you and then redirect to the new URL. In the future this will produce an error.
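A rough before/after sketch of the rename (where the `labels` block sits depends on your existing config; the layout here is only illustrative):

```yaml
# Only the option rename comes from this changelog; the surrounding layout is assumed.
labels:
  - label: person
    confidence: 0.8
    # triggers_recording: true   # deprecated, will print a warning
    trigger_recorder: true       # use this instead
```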
New features
- DeepStack object detection is now available. See the new section in the README.
- The post processor class is now more general, which means adding new processors is a piece of cake.
- DeepStack face recognition is now available. See the new section in the README.
- Jetson Nano support. An image specifically made for the Jetson Nano is now available: `roflcoopter/jetson-nano-viseron`. Hardware accelerated decoding of cameras is supported, as well as YOLOv4 with a CUDA backend.
- New config option `filename_pattern` under `recorder` and `thumbnail`. Allows you to change the output filename of recordings and thumbnails.
- New config option `ffprobe_loglevel`.
- An alias is created for each ffmpeg process to make it easy to spot which ffmpeg process belongs to which camera. If you have a camera called Front Door, your ffmpeg command will start with `front_door` instead of `ffmpeg`.
- New config option `color_logs` which controls if logs are colored or not. Enabled by default.
- New config option `trigger_recorder` under `motion_detection` which allows motion to start recordings.
- New config option `enable` under `object_detection` which allows you to disable object detection if you want to run `motion_detection` only (a combined sketch of these new options follows this list).
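A rough sketch of some of the options above. The option names are taken from this changelog; the surrounding structure and example values are assumptions, so check the README for the real schema:

```yaml
# Structure and example values are assumptions.
object_detection:
  type: deepstack                # DeepStack object detection; host/port keys assumed
  host: deepstack
  port: 5000
  enable: true                   # set to false to run motion detection only
motion_detection:
  trigger_recorder: true         # let motion start recordings
recorder:
  filename_pattern: "%H:%M:%S"   # assumed strftime-style pattern
```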
Changes
- All docker images are now based on Ubuntu Focal (20.04), which ships Python 3.8
- OpenCV upgraded to 4.5.2
- FFmpeg upgraded to 4.4
- dlib upgraded to 19.22
- OpenCL upgraded to 21.15.19533
- The detector class is now more general which means adding new detectors is a piece of cake.
- The `amd64-viseron` image now once again defaults to YOLOv3. This is done because YOLOv4 has performance issues that are not present when running on CUDA. YOLOv4 models are still provided so you can pick and choose which one you like if YOLOv3 does not suit you.
- Usernames and passwords are now redacted from the FFmpeg command in logs
- All long-running threads are now monitored and automatically restarted in case of an exception. The FFmpeg process is also monitored. Closes #109. Closes #11. Closes #113
- Permissions are now automatically set when not running as root for EdgeTPU and all hardware acceleration methods. Closes #173. Closes #176
- Viseron now once again runs as root by default. This is to make sure Viseron always works for new users. Setting PUID and PGID to something other than root works almost always, so if you would rather not run as root it should work fine.
- Subprocesses, like the one used only for segments, are now monitored and restarted accordingly.
- FFmpeg errors will now be logged to standard Python logging instead of being suppressed
- Cameras are now set up completely independently of each other. Previously if one camera was blocking, no other camera would process frames.
- If setup of a camera fails, the setup process will be retried indefinitely. This is done to guard against cameras that are temporarily unavailable at startup.
Fixes
- Fix a crash caused by using `require_motion: true` on a label inside a zone while not tracking it in the entire field of view.
Docker images will be on Docker Hub shortly
1.8.0 - Docker revamp
This release brings a huge revamp to the Docker containers.
This was a very big change and some bugs might still be present.
There is also experimental support for the RPi4 which might not work as expected in this first version.
The support for RPi4 will improve in the coming versions, but I need your help with testing and reporting bugs.
Breaking changes
- Docker image names have changed according to the table below. Note that `roflcoopter/viseron` is now a multiarch image, which means you don't need to pick between the images yourself unless you want CUDA support.

  | Old | New |
  | --- | --- |
  | `roflcoopter/viseron` | `roflcoopter/amd64-viseron` |
  | `roflcoopter/viseron-cuda` | `roflcoopter/amd64-cuda-viseron` |
  | `roflcoopter/viseron-vaapi` | Removed, same as `roflcoopter/amd64-viseron` |
  | `roflcoopter/viseron-rpi3` | `roflcoopter/rpi3-viseron` |
  | N/A | `roflcoopter/aarch64-viseron` |

- `port` is now required for `substream` (see the sketch after this list)
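A rough camera sketch with a substream (only `port` and `stream_format` come from this release; the rest of the layout is assumed, so check the README):

```yaml
# Only port/stream_format for substream are from this changelog; the rest is assumed.
cameras:
  - name: front_door
    host: 192.168.10.10
    port: 554
    path: /main_stream
    substream:
      port: 554             # now required
      path: /sub_stream
      stream_format: rtsp   # also newly supported for substream
```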
Changes and new Features
- New config option `require_motion` for labels. If set to true, the recorder will stop when motion is no longer detected, even if the object still is. This is useful to avoid never-ending recordings of stationary objects, such as a car on a driveway (a hedged sketch follows this list).
- Complete rewrite of all Dockerfiles(!). Multistage builds are now used extensively, which dramatically reduces the size of the containers. All containers are now built on Azure Pipelines, which means I no longer have to build them all locally(!!). Cross-building is done using Balenalib's base images, which means we now have (experimental) support for the RPi4(!!!). This new way of working with containers means I can easily support different hardware, such as the Jetson Nano, in the near future. Multiarch images are also in play, which means you don't need to pull different images based on your architecture, unless you want a specific one, like the amd64 CUDA version. Closes #1, closes #66
- Static MJPEG streams can now be configured, which provides better performance since processing only happens once. See the new section on Static MJPEG Streams in the README for more information. Closes #23
- An MJPEG stream is now served for each camera. A number of query parameters are available to control resolution, what is drawn on the frames, etc. See the new section in the README for more information. Closes #23
- `stream_format` and `port` are now supported for `substream`. Closes #112
- Viseron no longer runs as root in the containers. You can now set PUID and PGID as environment variables to control the user. If you are using docker-compose it might look like this:

  ```yaml
  version: "2.4"
  services:
    viseron:
      image: roflcoopter/viseron:latest
      container_name: viseron
      volumes:
        - <recordings path>:/recordings
        - <config path>:/config
        - /etc/localtime:/etc/localtime:ro
      environment:
        - PUID=1000
        - PGID=1000
  ```

- Pin `libedgetpu-max` to avoid version mismatch
- New config option `save_unknown_faces` for dlib face recognition. If set, unknown faces will be saved in a folder named unknown next to your regular faces folders. These can then be manually moved to the folder of the correct person to improve accuracy. Closes #90, Closes #96
- Audio is now recorded by default if the camera supports it. If you want, you can specify the audio codec with `audio_codec` under `recorder`. This feature is experimental and I need some testers for it.
- Default to YOLOv4 instead of YOLOv4-tiny. You can go back to using YOLOv4-tiny by changing `model_path`.
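A rough sketch of `require_motion` and `audio_codec` (the option names are from this release; their placement and the example codec are assumptions):

```yaml
# Placement and example values are assumptions -- verify against the README.
object_detection:
  labels:
    - label: car
      confidence: 0.8
      require_motion: true   # recorder stops once motion stops, even if the car remains
recorder:
  audio_codec: aac           # example value; audio is recorded by default if supported
```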
Fixes
- The labels file was corrupted in the amd64* images, so labels of objects made no sense.
- Better reporting of errors when loading EdgeTPU, fixes #76
Docker images will be on Docker Hub shortly
1.8.0b6
Changes and new Features
- Pin `libedgetpu-max` to avoid version mismatch
- New config option `save_unknown_faces` for dlib face recognition. If set, unknown faces will be saved in a folder named unknown next to your regular faces folders. These can then be manually moved to the folder of the correct person to improve accuracy. Closes #90, Closes #96
- Audio is now recorded by default if the camera supports it. If you want, you can specify the audio codec with `audio_codec` under `recorder`. This feature is experimental and I need some testers for it.
- Default to YOLOv4 instead of YOLOv4-tiny
Fixes
- Permissions to access `/dev/dri` to utilize VAAPI
Previous 1.8.0 betas
Breaking changes
- `port` is now required for `substream`
Changes and new Features
- New config option `require_motion` for labels. If set to true, the recorder will stop when motion is no longer detected, even if the object still is. This is useful to avoid never-ending recordings of stationary objects, such as a car on a driveway.
- Complete rewrite of all Dockerfiles(!). Multistage builds are now used extensively, which dramatically reduces the size of the containers. All containers are now built on Azure Pipelines, which means I no longer have to build them all locally(!!). Cross-building is done using Balenalib's base images, which means we now have (experimental) support for the RPi4(!!!). This new way of working with containers means I can easily support different hardware, such as the Jetson Nano, in the near future. Multiarch images are also in play, which means you don't need to pull different images based on your architecture, unless you want a specific one, like the amd64 CUDA version. Closes #1, closes #66
- Static MJPEG streams can now be configured, which provides better performance since processing only happens once. See the new section on [Static MJPEG Streams](https://github.com/roflcoopter/viseron#static-mjpeg-streams) in the README for more information. Closes #23
- An MJPEG stream is now served for each camera. A number of query parameters are available to control resolution, what is drawn on the frames, etc. See the new section in the README for more information. Closes #23
- `stream_format` and `port` are now supported for `substream`. Closes #112
- Viseron no longer runs as root in the containers. You can now set PUID and PGID as environment variables to control the user. If you are using docker-compose it might look like this:

  ```yaml
  version: "2.4"
  services:
    viseron:
      image: roflcoopter/viseron:latest
      container_name: viseron
      volumes:
        - <recordings path>:/recordings
        - <config path>:/config
        - /etc/localtime:/etc/localtime:ro
      environment:
        - PUID=1000
        - PGID=1000
  ```
Fixes
- The labels file was corrupted in the amd64* images, so labels of objects made no sense.
- Better reporting of errors when loading EdgeTPU, fixes #76
Docker images will be on Docker Hub shortly
1.8.0b5 - Fix of labels file in amd64 images.
Changes and new Features
- New config option `require_motion` for labels. If set to true, the recorder will stop when motion is no longer detected, even if the object still is. This is useful to avoid never-ending recordings of stationary objects, such as a car on a driveway.
Fixes
- The labels file was corrupted in the amd64* images, so labels of objects made no sense.
Previous 1.8.0 betas
Breaking changes
- `port` is now required for `substream`
Changes and new Features
- Complete rewrite of all Dockerfiles(!). Multistage builds are now used extensively, which dramatically reduces the size of the containers. All containers are now built on Azure Pipelines, which means I no longer have to build them all locally(!!). Cross-building is done using Balenalib's base images, which means we now have (experimental) support for the RPi4(!!!). This new way of working with containers means I can easily support different hardware, such as the Jetson Nano, in the near future. Multiarch images are also in play, which means you don't need to pull different images based on your architecture, unless you want a specific one, like the amd64 CUDA version. Closes #1, closes #66
- Static MJPEG streams can now be configured, which provides better performance since processing only happens once. See the new section on [Static MJPEG Streams](https://github.com/roflcoopter/viseron#static-mjpeg-streams) in the README for more information. Closes #23
- An MJPEG stream is now served for each camera. A number of query parameters are available to control resolution, what is drawn on the frames, etc. See the new section in the README for more information. Closes #23
- `stream_format` and `port` are now supported for `substream`. Closes #112
- Viseron no longer runs as root in the containers. You can now set PUID and PGID as environment variables to control the user. If you are using docker-compose it might look like this:

  ```yaml
  version: "2.4"
  services:
    viseron:
      image: roflcoopter/viseron:latest
      container_name: viseron
      volumes:
        - <recordings path>:/recordings
        - <config path>:/config
        - /etc/localtime:/etc/localtime:ro
      environment:
        - PUID=1000
        - PGID=1000
  ```
Fixes
- Better reporting of errors when loading EdgeTPU, fixes #76
Docker images will be on Docker Hub shortly