
Releases: roflcoopter/viseron

1.7.0b1 - Segments and Substream

26 Oct 07:40
5012060
Pre-release

Breaking changes

  • Recorder global_args can no longer be specified

Changes and new Features

  • FFmpeg segments are now used to record instead of caching frames in memory. Closes #26
  • A substream can now be configured, which will be used for image processing.
    This can greatly reduce system load, which is huge for smaller devices like an RPi, especially coupled with the segments above.
  • Upgrades to CUDA 11 and OpenCV 4.5.0 for CUDA image
  • Recordings are encoded to /tmp, then moved to target directory when done. Closes #49
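Conceptually, segment-based recording selects, from a rolling set of short fixed-length segments on disk, the ones that overlap the event window, then joins them. A minimal sketch of the selection step (segment length, tuple layout, and the function name are illustrative, not Viseron's actual code):

```python
def overlapping_segments(segments, event_start, event_end):
    """Return segments that overlap [event_start, event_end].

    segments: list of (start_time, duration) tuples, e.g. derived
    from timestamped segment filenames. All times in seconds.
    """
    return [
        (start, duration)
        for start, duration in segments
        if start < event_end and start + duration > event_start
    ]

# Five 5-second segments starting at t=0
segments = [(t, 5) for t in range(0, 25, 5)]
# An event from t=7 to t=12 needs the segments covering 5-10 and 10-15
print(overlapping_segments(segments, 7, 12))  # → [(5, 5), (10, 5)]
```

The selected segments can then be concatenated into the final recording, which is why no frames need to be cached in memory.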

Fixes

  • Fixed a formatting issue with the duplicate log filter
  • Return codes > 0 from MQTT are now logged as errors in text format. Closes #59

Docker images are available on Docker Hub
roflcoopter/viseron:1.7.0b1
roflcoopter/viseron-cuda:1.7.0b1
roflcoopter/viseron-vaapi:1.7.0b1
roflcoopter/viseron-rpi:1.7.0b1

1.6.2 - Bugfix for MQTT

19 Oct 10:55
5012060

Fixes a crash, detected during troubleshooting of #62, which occurs when MQTT is not connected.

Note: Due to some unfortunate circumstances, the cuda image will not be built with this fix.
Instead, the fix will be included in 1.7.0, which should come along shortly.

Docker images are available on Docker Hub
roflcoopter/viseron:1.6.2
roflcoopter/viseron-vaapi:1.6.2
roflcoopter/viseron-rpi:1.6.2

Fixes for codec issues

12 Oct 16:33
88c71d3

1.6.0 - Face recognition and MQTT improvements

11 Oct 14:46
8ee3115

Breaking changes

  • Detectors now supply their own configuration validators, which might break existing configs.
    If you used type: edgetpu and supplied an unsupported configuration option, such as suppression,
    Viseron would accept it. That is no longer the case, and the option has to be removed.
    The README has been updated accordingly.
  • A new config block has been created for Home Assistant Discovery. It is still enabled by default, but if you previously set your own discovery_prefix you now have to move it under home_assistant like this:
    mqtt:
      broker: <ip address or hostname of broker>
      port: <port the broker listens on>
      home_assistant:
        discovery_prefix: <yourcustomprefix>
  • Pretty much all MQTT topics have changed. Have a look at the updated documentation to see how they are structured.

Changes and new Features

  • Face recognition is here! See the documentation for how to set this up.
    Here is an example config to get you started:
    object_detection:
      labels:
        - label: person
          confidence: 0.8
          post_processor: face_recognition
    
    post_processors:
      face_recognition:
  • Unique ID and Device registry information is now included in the Home Assistant discovery setup.
    This means that you can now customize the entities created in Home Assistant via its interface.
  • The binary sensors created for each label now include a count attribute which tells you the number of detected objects.
  • Includes support for h265 in ffmpeg for all containers
  • The vaapi and generic images now default to the YOLOv3 model.
    Previously the default was YOLOv3-tiny, which is a lot faster but very inaccurate.
    If you want to go back to the tiny version for the sake of reduced CPU usage, you can change these settings:
    object_detection:
      model_path: /detectors/models/darknet/yolov3-tiny.weights
      model_config: /detectors/models/darknet/yolov3-tiny.cfg
  • Labels drawn on the image that is sent when publish_image: true are now clearer
  • Adds support for PCIe based EdgeTPUs
  • Adds a new configuration option called log_all_objects which is placed under object_detection
    object_detection:
      log_all_objects: true
    If set to true, all found objects will be logged when the loglevel is DEBUG (the previous behaviour). The default is false, which only logs objects that pass the labels filters.
  • Uses ffprobe to get stream information instead of OpenCV.
  • The automatic codec picker is now smarter and will try to find a matching hwaccel decoder codec for the source.
    Right now it only knows about h264 and h265 but this can be expanded in the future.
  • New config option ffmpeg_loglevel under cameras. Use this to debug your camera decoding command.
  • New config option ffmpeg_recoverable_errors under cameras. See the README for a detailed explanation.
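The smarter codec picker can be thought of as a lookup from the codec_name that ffprobe reports for the source stream to a matching hardware decoder, with a software fallback when nothing matches. A sketch under assumptions — the decoder names below are standard FFmpeg ones (NVIDIA's cuvid variants), but the exact mapping and decoder names Viseron uses may differ per image:

```python
# Illustrative mapping from source codec (ffprobe's codec_name field)
# to an FFmpeg hwaccel decoder. Real availability depends on the
# FFmpeg build and the hardware present.
HWACCEL_DECODERS = {
    "h264": "h264_cuvid",
    "hevc": "hevc_cuvid",
}

def pick_decoder(codec_name):
    """Return a hardware decoder for the codec, or None to let
    FFmpeg fall back to its default software decoder."""
    return HWACCEL_DECODERS.get(codec_name)

print(pick_decoder("h264"))  # → h264_cuvid
print(pick_decoder("vp9"))   # → None (software fallback)
```

Expanding support for more codecs is then just a matter of growing the mapping.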

Fixes

  • Fixes an error when the face_recognition folder is improperly structured
  • EdgeTPU not being properly installed in the RPi image
  • EdgeTPU not being loaded properly
  • Fix a crash when publish_image: true and trigger_detector: false
  • Fix interval under motion_detection and object_detection not allowing floats

Docker images are available on Docker Hub
roflcoopter/viseron:1.6.0
roflcoopter/viseron-cuda:1.6.0
roflcoopter/viseron-vaapi:1.6.0
roflcoopter/viseron-rpi:1.6.0

1.6.0b2

07 Oct 10:25
8827c02
Pre-release

Changes and new Features

  • Adds support for PCIe based EdgeTPUs
  • Adds a new configuration option called log_all_objects which is placed under object_detection
    object_detection:
      log_all_objects: true
    If set to true, all found objects will be logged when the loglevel is DEBUG (the previous behaviour). The default is false, which only logs objects that pass the labels filters.
  • Uses ffprobe to get stream information instead of OpenCV.
  • The automatic codec picker is now smarter and will try to find a matching hwaccel decoder codec for the source.
    Right now it only knows about h264 and h265 but this can be expanded in the future.

Fixes

  • Fixes an error when the face_recognition folder is improperly structured
  • EdgeTPU not being properly installed in the RPi image
  • EdgeTPU not being loaded properly

Docker images are available on Docker Hub
roflcoopter/viseron:1.6.0b2
roflcoopter/viseron-cuda:1.6.0b2
roflcoopter/viseron-vaapi:1.6.0b2
roflcoopter/viseron-rpi:1.6.0b2

1.6.0b1 - Face recognition and MQTT changes

04 Oct 06:57

Breaking changes

  • Detectors now supply their own configuration validators, which might break existing configs.
    If you used type: edgetpu and supplied an unsupported configuration option, such as suppression,
    Viseron would accept it. That is no longer the case, and the option has to be removed.
    The README has been updated accordingly.
  • A new config block has been created for Home Assistant Discovery. It is still enabled by default, but if you previously set your own discovery_prefix you now have to move it under home_assistant like this:
    mqtt:
      broker: <ip address or hostname of broker>
      port: <port the broker listens on>
      home_assistant:
        discovery_prefix: <yourcustomprefix>
  • Pretty much all MQTT topics have changed. Have a look at the updated documentation to see how they are structured.

Changes and new Features

  • Face recognition is here! See the documentation for how to set this up.
    Here is an example config to get you started:
    object_detection:
      labels:
        - label: person
          confidence: 0.8
          post_processor: face_recognition
    
    post_processors:
      face_recognition:
  • Unique ID and Device registry information is now included in the Home Assistant discovery setup.
    This means that you can now customize the entities created in Home Assistant via its interface.
  • The binary sensors created for each label now include a count attribute which tells you the number of detected objects.
  • Includes support for h265 in ffmpeg for all containers
  • The vaapi and generic images now default to the YOLOv3 model.
    Previously the default was YOLOv3-tiny, which is a lot faster but very inaccurate.
    If you want to go back to the tiny version for the sake of reduced CPU usage, you can change these settings:
    object_detection:
      model_path: /detectors/models/darknet/yolov3-tiny.weights
      model_config: /detectors/models/darknet/yolov3-tiny.cfg
  • Labels drawn on the image that is sent when publish_image: true are now clearer

Fixes

  • Fix a crash when publish_image: true and trigger_detector: false
  • Fix interval under motion_detection and object_detection not allowing floats

Docker images are available on Docker Hub
roflcoopter/viseron:1.6.0b1
roflcoopter/viseron-cuda:1.6.0b1
roflcoopter/viseron-vaapi:1.6.0b1
roflcoopter/viseron-rpi:1.6.0b1

1.5.0 - Masks and improvements to motion detector

23 Sep 14:12
cf51d68

Lots of goodies in this one!

Breaking changes

  • area for motion_detection is now a percentage-based value and needs to be changed from an int to a float.
    This means that if you change your width and/or height, area won't be affected.
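To convert an existing pixel-based area to the new float, divide the old value by the total pixel count of the frame motion detection runs on. A quick illustration (the frame size and old value here are made-up examples):

```python
# Hypothetical old config: area: 6000 (pixels) on a 1920x1080 frame
width, height = 1920, 1080
old_area_px = 6000

# New percentage-based value, independent of resolution changes
new_area = old_area_px / (width * height)
print(round(new_area, 4))  # → 0.0029
```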

Changes and new Features

  • Masks can now be configured under each cameras motion_detection block.
    Masks are used to limit motion detection from running in a specified area.
    The configuration is similar to how zones are configured, here is an example:

    Config example
    cameras:
      - name: name
        host: ip
        port: port
        path: /Streaming/Channels/101/
        motion_detection:
          area: 0.07
          mask:
            - points:
                - x: 0
                  y: 0
                - x: 250
                  y: 0
                - x: 250
                  y: 250
                - x: 0
                  y: 250
            - points:
                - x: 500
                  y: 500
                - x: 1000
                  y: 500
                - x: 1000
                  y: 750
                - x: 300
                  y: 750

    The masks are drawn on the image published over MQTT. They have an orange border and a black background with 70% opacity.

  • A switch entity is now created in Home Assistant which can be used to arm/disarm a camera.
    When disarmed, the decoder stops completely, so no system load is used.

  • Motion contours are now drawn on the image that is being published over MQTT.
    Dark purple contours are smaller than the configured area, while bright pink contours are larger than the configured area.

  • New config option max_timeout under motion_detection which specifies the maximum number of seconds that motion alone is allowed to keep the recorder going.
    This is used to prevent never-ending recordings when motion detection is too sensitive.

  • New config option rtsp_transport under cameras. Change this if your camera doesn't support tcp.

  • VA-API is now installed in the CUDA image for use with ffmpeg decoding/encoding
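The max_timeout option above can be sketched as a guard in the recorder loop: a recent object detection always keeps the recording alive, while motion alone does so only up to max_timeout seconds after the last object. Names and structure here are illustrative, not Viseron's internals:

```python
def keep_recording(motion_active, now, last_object_time, timeout, max_timeout):
    """Decide whether the recorder should keep going.

    Objects always extend the recording for `timeout` seconds;
    motion extends it only for `max_timeout` seconds after the
    last detected object. All times in seconds.
    """
    if now - last_object_time < timeout:
        return True  # recent object detection keeps recording
    if motion_active and now - last_object_time < max_timeout:
        return True  # motion extends, but only up to max_timeout
    return False

# Object last seen at t=0, timeout=10s, max_timeout=30s:
print(keep_recording(True, 20, 0, 10, 30))  # → True (motion, under cap)
print(keep_recording(True, 40, 0, 10, 30))  # → False (cap exceeded)
```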

Fixes

  • The binary sensor for motion detection is now properly set to on/off.

Docker images are available on Docker Hub
roflcoopter/viseron:latest
roflcoopter/viseron-cuda:latest
roflcoopter/viseron-vaapi:latest
roflcoopter/viseron-rpi:latest

1.4.0 - Zones!

12 Sep 20:39
fdb8f1e

Changes and new Features

  • Zones are here! You can now configure zones for each camera and specify labels to track per zone.
    Here is an example:

    cameras:
      - name: name
        host: ip
        port: port
        path: /Streaming/Channels/101/
        zones:
          - name: zone1
            points:
              - x: 0
                y: 500
              - x: 1920
                y: 500
              - x: 1920
                y: 1080
              - x: 0
                y: 1080
            labels:
              - label: person
                confidence: 0.9
          - name: zone2
            points:
              - x: 0
                y: 0
              - x: 500
                y: 0
              - x: 500
                y: 500
              - x: 0
                y: 500
            labels:
              - label: cat
                confidence: 0.5

    A polygon will be drawn on the image using each point. At least 3 points have to be supplied.
    If you are using Home Assistant, Viseron will publish an image to the camera entity over MQTT
    with zones and objects drawn upon it.
    The drawing and publishing take some processing power, so this should only be used for debugging and tuning.

  • A boatload of new binary sensors are now created, tracking objects and zones. Check out the README for a detailed explanation.

  • Allows a logging block to be entered per camera.

  • Logging from motion detection is now named per camera.

  • The logger for the recorder is now named per camera.

  • You can now set log level individually for motion_detection and object_detection, either globally or for each camera.

  • New config option publish_image.
    If enabled, Viseron will publish an image to the MQTT camera entity with objects and zones drawn upon it.

  • You can now specify width, height and fps individually in the camera config.

  • New config option triggers_recording for labels. If set to false, only the binary sensors in MQTT will update, but no recording will start. This works on all labels configs (global object detector, per camera or per zone).

  • Recorded videos will now be saved under a specific folder per camera.
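Deciding whether a detected object is inside a zone comes down to a point-in-polygon test against the configured points (typically using a reference point on the object's bounding box). A minimal ray-casting sketch, purely illustrative and not Viseron's actual implementation:

```python
def point_in_polygon(x, y, points):
    """Ray-casting test: is (x, y) inside the polygon given by points?

    points: list of (x, y) vertices, at least 3, in order.
    """
    inside = False
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        # Does a ray going right from (x, y) cross edge (x1,y1)-(x2,y2)?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# zone1 from the example above: the lower half of a 1920x1080 frame
zone1 = [(0, 500), (1920, 500), (1920, 1080), (0, 1080)]
print(point_in_polygon(960, 800, zone1))  # → True
print(point_in_polygon(960, 200, zone1))  # → False
```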

Fixes

  • TFLite is now properly installed in the generic, vaapi and cuda image for EdgeTPU support.
  • Fixed motion detection not respecting timeout.

Docker images are available on Docker Hub
roflcoopter/viseron
roflcoopter/viseron-cuda
roflcoopter/viseron-vaapi
roflcoopter/viseron-rpi

1.4.0.b2 - Improvements to zones and bug fixes

10 Sep 05:51

Changes and new Features

  • Allows a logging block to be entered per camera.
  • New config option publish_image.
    If enabled, Viseron will publish an image to the MQTT camera entity with objects and zones drawn upon it. If a tracked object is in a zone the zone will turn green.
  • You can now specify width, height and fps individually in the camera config.

Fixes

  • The binary sensors created for zones and labels in zones are now properly turned on/off.

Docker images are available on Docker Hub
roflcoopter/viseron:dev
roflcoopter/viseron-cuda:dev
roflcoopter/viseron-vaapi:dev
roflcoopter/viseron-rpi:dev

1.4.0.b1 - Zones!

08 Sep 09:07
Pre-release

Changes and new Features

  • Zones are here! Functionality is somewhat limited at the moment, but I need some testers on this as it's quite a big refactor.
    You can now configure zones for each camera and specify labels to track per zone.
    This is not reflected in the documentation just yet, but here is an example:

    cameras:
      - name: name
        host: ip
        port: port
        path: /Streaming/Channels/101/
        zones:
          - name: zone1
            points:
              - x: 0
                y: 500
              - x: 1920
                y: 500
              - x: 1920
                y: 1080
              - x: 0
                y: 1080
            labels:
              - label: person
                confidence: 0.9
          - name: zone2
            points:
              - x: 0
                y: 0
              - x: 500
                y: 0
              - x: 500
                y: 500
              - x: 0
                y: 500
            labels:
              - label: cat
                confidence: 0.5

    A polygon will be drawn on the image using each point. At least 3 points have to be supplied.
    If you are using Home Assistant, Viseron will publish an image to the camera entity over MQTT
    with zones and objects drawn upon it.
    The drawing and publishing take some processing power, so in the coming beta releases this will be configurable.

    A few new binary sensors will also be created, one for each zone and one for each label in the zone.
    The zone binary sensor will turn on when at least one tracked object is in the zone.
    The label binary sensor will turn on when at least one matching object is in the zone.

Docker images are available on Docker Hub
roflcoopter/viseron:dev
roflcoopter/viseron-cuda:dev
roflcoopter/viseron-vaapi:dev
roflcoopter/viseron-rpi:dev