0.15.0 Release Candidate 1 #16131
-
Upgraded from 0.14.1, getting repeated disconnects from Unifi Cameras. CPU seems to spike 10-20% as the cameras repeatedly reconnect.
-
Can't export a timelapse; getting this error. Regular exports work. On RC 1.
-
I think the 640x640 model makes me want to switch from my Coral to my Nvidia 2060. Since I have Frigate+ and my custom model, is this the only change needed?
Can I ignore the instructions from the docs below?
-
So now that v0.15.0 is in RC1, when are we getting v0.16.0-beta1? XD All jokes aside, thanks for all the work you guys put into this project for us all! The last couple of versions have seen some incredible improvements to functionality and accuracy, and I'm looking forward to seeing what's to come!
-
Could someone give me a hint? I've spent more time than I'm willing to admit not understanding the error, or why I should need to change PYTHONHOME in a container. I get the same error even with an almost-empty config and "fresh" config and media folders.
-
Is it possible to maintain zoom level while scrubbing through footage? If not, would it be entertained as a feature? If so, I can open a feature request issue.
-
I have this strange problem, at least since beta7, but possibly before. When updating the description of a snapshot in Tracked Object Details, I cannot type a lowercase 'a' character. At first I thought it was my keyboard, but it works everywhere else, and I can copy and paste the letter into the text box from a different window.
-
Hmm, I'm currently running RC1
-
Running into an issue which I have seen in older releases and is now back again, and I can't seem to navigate around it. I have 3 Reolink cameras and one RTSP feed from a Wyze camera. For whatever reason the Reolink cameras randomly keep showing "no frames have been received"; in some cases if I click into the camera it's working, in other cases it's really not. I have tried several different ffmpeg and HW acceleration configurations and can't seem to get it to consistently work with 0.15 RC1. The logs show several different issues, everything from not receiving data to incorrect data and/or timestamps and ffmpeg crashing.
Config:
mqtt:
enabled: true
host: 192.168.100.226
topic_prefix: frigate
user: frigate
password: *****
stats_interval: 60
birdseye:
enabled: true
#restream: true
width: 1920
height: 1080
mode: continuous
## tried different global modes and also removed it to let Frigate decide; nothing helped
#ffmpeg:
# hwaccel_args: preset-vaapi
go2rtc:
streams:
Teale_Front_Bullet:
- rtsp://***:{FRIGATE_RTSP_PASSWORD}@10.10.100.6:554/h265Preview_01_main
- ffmpeg:Teale_Front_Bullet#video=h264#hardware#audio=opus
Teale_Front_Dome:
- rtsp://***:{FRIGATE_RTSP_PASSWORD}@10.10.100.4:554/h265Preview_01_main
- ffmpeg:Teale_Front_Dome#video=h264#hardware#audio=opus
Teale_Front_Door:
- ffmpeg:https://10.10.100.5/flv?port=1935&app=bcs&stream=channel0_main.bcs&user=***&password={FRIGATE_RTSP_PASSWORD}#video=copy#audio=copy#audio=opus
- rtsp://***:{FRIGATE_RTSP_PASSWORD}@10.10.100.5:554/h264Preview_01_sub
#- ffmpeg:Teale_Front_Door#video=copy#audio=copy#audio=opus
Teale_Garage_Door:
- rtsp://192.168.100.26:8554/garage-cam
#- ffmpeg:Teale_Garage_Door#video=h264#hardware#audio=opus
cameras:
Garage_Door:
ffmpeg:
inputs:
- path: rtsp://127.0.0.1:8554/Teale_Garage_Door
input_args: preset-rtsp-restream
roles:
- record
- detect
hwaccel_args: preset-intel-qsv-h264
motion:
mask: 0.781,0.944,0.99,0.95,0.987,0.986,0.782,0.995
threshold: 35
contour_area: 10
improve_contrast: true
zones:
Side_of_house:
coordinates: 0.008,0.476,0.109,0.335,0.544,0.132,0.992,0.198,0.992,0.972,0.005,0.958
loitering_time: 0
review:
alerts:
required_zones: Side_of_house
Teale_Front_Bullet:
ffmpeg:
inputs:
- path: rtsp://127.0.0.1:8554/Teale_Front_Bullet
input_args: preset-rtsp-restream
roles:
- record
- detect
hwaccel_args: preset-intel-qsv-h265
output_args:
record: preset-record-generic-audio-aac
motion:
threshold: 45
lightning_threshold: 0.7
improve_contrast: false
mask:
- 0,130,1920,100,1920,0,0,0
zones:
zone_main_grass:
coordinates: 543,612,1113,531,1391,627,1568,689,1702,863,1920,1020,1920,1080,21,1080,77,686
zone_curb_strip:
coordinates: 77,647,1167,482,1019,455,79,584
inertia: 3
objects:
- car
- truck
objects:
filters:
person: {}
car:
mask:
- 1920,1080,0,1080,76,592,1044,453,1573,681
review:
alerts:
required_zones: zone_main_grass
Teale_Front_Dome:
ffmpeg:
inputs:
- path: rtsp://127.0.0.1:8554/Teale_Front_Dome
input_args: preset-rtsp-restream
roles:
- record
- detect
hwaccel_args: preset-intel-qsv-h265
output_args:
record: preset-record-generic-audio-aac
motion:
threshold: 45
lightning_threshold: 0.7
improve_contrast: false
mask:
- 1920,108,1920,0,494,0,0,0,0,129
zones:
zone_driveway:
coordinates: 0,1080,1920,1080,1920,793,511,583,0,691
review:
alerts:
required_zones: zone_driveway
detect:
annotation_offset: 0
Teale_Front_Door:
ffmpeg:
inputs:
- path: rtsp://127.0.0.1:8554/Teale_Front_Door
input_args: preset-rtsp-restream
roles:
- record
- detect
hwaccel_args: preset-vaapi
output_args:
record: preset-record-generic-audio-copy
motion:
threshold: 45
lightning_threshold: 0.7
contour_area: 20
improve_contrast: false
mask:
- 1920,0,1920,267,1920,528,0,412,0,0
objects:
filters:
car:
mask:
- 0,1080,249,1080
- 0,957,0,1080,1920,1080,1920,102,1363,387,1371,851,1019,778,130,912
zones:
zone_main_grass:
coordinates: 409,668,943,713,957,795,769,832,510,890,254,963,119,878,0,780,0,691
zone_curb_strip:
coordinates: 1075,710,1045,680,458,657,395,673
front_door:
coordinates: 0,0.886,0,1,1,1,1,0.094,0.71,0.358,0.714,0.788,0.531,0.72,0.068,0.844
inertia: 3
loitering_time: 0
zone_driveway:
coordinates: 0.546,0.729,0.627,0.828,0.711,0.799,0.721,0.65,0.593,0.666,0.535,0.666
inertia: 3
loitering_time: 0
review:
alerts:
required_zones:
- front_door
- zone_main_grass
detections:
required_zones:
- front_door
- zone_driveway
- zone_curb_strip
- zone_main_grass
detectors:
# Required: name of the detector
#detector_1:
# Required: type of the detector
# Frigate provided types include 'cpu', 'edgetpu', 'openvino' and 'tensorrt' (default: shown below)
# Additional detector types can also be plugged in.
# Detectors may require additional configuration.
# Refer to the Detectors configuration page for more information.
# type: cpu
OV1:
type: openvino
device: GPU
# path: /openvino-model/yolox_tiny.xml
model:
model_type: yolonas
width: 320
height: 320
input_tensor: nchw
input_pixel_format: bgr
path: /openvino-model/custom_model/yolo_nas_s~20240725-010247.onnx
#labelmap_path: /openvino-model/coco_91cl_bkgr.txt
labelmap_path: /openvino-model/custom_model/coco_80cl.txt
detect:
# Optional: width of the frame for the input with the detect role (default: use native stream resolution)
width: 1920
# Optional: height of the frame for the input with the detect role (default: use native stream resolution)
height: 1080
# Optional: desired fps for your camera for the input with the detect role (default: shown below)
# NOTE: Recommended value of 5. Ideally, try and reduce your FPS on the camera.
fps: 5
# Optional: enables detection for the camera (default: True)
#enabled: True
# Optional: Number of consecutive detection hits required for an object to be initialized in the tracker. (default: 1/2 the frame rate)
#min_initialized: 2
# Optional: Number of frames without a detection before Frigate considers an object to be gone. (default: 5x the frame rate)
#max_disappeared: 25
# Optional: Configuration for stationary object tracking
#stationary:
# Optional: Frequency for confirming stationary objects (default: same as threshold)
# When set to 1, object detection will run to confirm the object still exists on every frame.
# If set to 10, object detection will run to confirm the object still exists on every 10th frame.
# interval: 100
# Optional: Number of frames without a position change for an object to be considered stationary (default: 10x the frame rate or 10s)
# threshold: 150
# Optional: Define a maximum number of frames for tracking a stationary object (default: not set, track forever)
# This can help with false positives for objects that should only be stationary for a limited amount of time.
# It can also be used to disable stationary object tracking. For example, you may want to set a value for person, but leave
# car at the default.
# max_frames:
# Optional: Object specific values
# objects:
# car: 1000
# truck: 1000
objects:
# Optional: list of objects to track from labelmap.txt (default: shown below)
track:
- person
- car
- truck
- bicycle
- cat
- dog
- bird
- motorcycle
- ups
- fedex
# Optional: Motion configuration
# NOTE: Can be overridden at the camera level
motion:
# Optional: The threshold passed to cv2.threshold to determine if a pixel is different enough to be counted as motion. (default: shown below)
# Increasing this value will make motion detection less sensitive and decreasing it will make motion detection more sensitive.
# The value should be between 1 and 255.
threshold: 35
# Optional: The percentage of the image used to detect lightning or other substantial changes where motion detection
# needs to recalibrate. (default: shown below)
# Increasing this value will make motion detection more likely to consider lightning or ir mode changes as valid motion.
# Decreasing this value will make motion detection more likely to ignore large amounts of motion such as a person approaching
# a doorbell camera.
lightning_threshold: 0.7
# Optional: Minimum size in pixels in the resized motion image that counts as motion (default: shown below)
# Increasing this value will prevent smaller areas of motion from being detected. Decreasing will
# make motion detection more sensitive to smaller moving objects.
# As a rule of thumb:
# - 10 - high sensitivity
# - 30 - medium sensitivity
# - 50 - low sensitivity
#contour_area: 10
# Optional: Alpha value passed to cv2.accumulateWeighted when averaging frames to determine the background (default: shown below)
# Higher values mean the current frame impacts the average a lot, and a new object will be averaged into the background faster.
# Low values will cause things like moving shadows to be detected as motion for longer.
# https://www.geeksforgeeks.org/background-subtraction-in-an-image-using-concept-of-running-average/
#frame_alpha: 0.01
# Optional: Height of the resized motion frame (default: 100)
# Higher values will result in more granular motion detection at the expense of higher CPU usage.
# Lower values result in less CPU, but small changes may not register as motion.
#frame_height: 100
# Optional: motion mask
# NOTE: see docs for more detailed info on creating masks
# mask: 0,900,1080,900,1080,1920,0,1920
# Optional: improve contrast (default: shown below)
# Enables dynamic contrast improvement. This should help improve night detections at the cost of making motion detection more sensitive
# for daytime.
improve_contrast: false
# Optional: Delay when updating camera motion through MQTT from ON -> OFF (default: shown below).
mqtt_off_delay: 30
# Optional: Record configuration
# NOTE: Can be overridden at the camera level
record:
# Optional: Enable recording (default: shown below)
# WARNING: If recording is disabled in the config, turning it on via
# the UI or MQTT later will have no effect.
enabled: true
# Optional: Number of minutes to wait between cleanup runs (default: shown below)
# This can be used to reduce the frequency of deleting recording segments from disk if you want to minimize i/o
expire_interval: 60
# Optional: Sync recordings with disk on startup and once a day (default: shown below).
sync_recordings: true
# Optional: Retention settings for recording
retain:
# Optional: Number of days to retain recordings regardless of events (default: shown below)
# NOTE: This should be set to 0 and retention should be defined in events section below
# if you only want to retain recordings of events.
days: 14
# Optional: Mode for retention. Available options are: all, motion, and active_objects
# all - save all recording segments regardless of activity
# motion - save all recordings segments with any detected motion
# active_objects - save all recording segments with active/moving objects
# NOTE: this mode only applies when the days setting above is greater than 0
mode: all
# Optional: Recording Export Settings
alerts:
retain:
days: 14
pre_capture: 10
post_capture: 10
detections:
retain:
days: 14
pre_capture: 10
post_capture: 10
snapshots:
# Optional: Enable writing jpg snapshot to /media/frigate/clips (default: shown below)
enabled: true
# Optional: save a clean PNG copy of the snapshot image (default: shown below)
clean_copy: true
# Optional: print a timestamp on the snapshots (default: shown below)
timestamp: true
# Optional: draw bounding box on the snapshots (default: shown below)
bounding_box: true
# Optional: crop the snapshot (default: shown below)
crop: false
# Optional: height to resize the snapshot to (default: original size)
#height: 175
# Optional: Restrict snapshots to objects that entered any of the listed zones (default: no required zones)
#required_zones: []
# Optional: Camera override for retention settings (default: global values)
retain:
# Required: Default retention days (default: shown below)
default: 30
# Optional: Per object retention days
#objects:
# person: 45
# Optional: quality of the encoded jpeg, 0-100 (default: shown below)
quality: 80
camera_groups:
Test:
order: 1
icon: LuAlertTriangle
cameras:
- Teale_Front_Bullet
- Teale_Front_Dome
- Teale_Front_Door
- Garage_Door
Birds_eye:
order: 2
icon: LuBird
cameras: birdseye
telemetry:
# Optional: Enabled network interfaces for bandwidth stats monitoring (default: empty list, let nethogs search all)
# Optional: Configure system stats
stats:
# Enable AMD GPU stats (default: shown below)
amd_gpu_stats: false
# Enable Intel GPU stats (default: shown below)
intel_gpu_stats: true
# Enable network bandwidth stats monitoring for camera ffmpeg processes, go2rtc, and object detectors. (default: shown below)
# NOTE: The container must either be privileged or have cap_net_admin, cap_net_raw capabilities enabled.
network_bandwidth: true
# Optional: Enable the latest version outbound check (default: shown below)
# NOTE: If you use the HomeAssistant integration, disabling this will prevent it from reporting new versions
version_check: true
semantic_search:
enabled: true
reindex: false
model_size: large
genai:
enabled: false
provider: ollama
base_url: http://192.168.100.151:11434
model: llava:7b
version: 0.15-1
go2rtc log
Frigate Log
-
I'm getting a really odd one on 0.15.0-rc1 that I can't seem to figure out. I'm running on a NUC i5 gen 8, Proxmox LXC. I cloned the LXC from my working Frigate (0.14.1) and gutted the config down to the recommended bare minimum. I have no errors in the logs. The UI is... almost non-responsive. I can get screens like the configuration editor to come up, but only after a 2nd (or later) attempt at clicking the menu item. I can attach logs/config etc., but there's not much to see.
-
Config:
docker logs -f frigate15:
UI that usually appears
-
Beta Documentation: https://deploy-preview-13787--frigate-docs.netlify.app/
Images
ghcr.io/blakeblackshear/frigate:0.15.0-rc1
ghcr.io/blakeblackshear/frigate:0.15.0-rc1-standard-arm64
ghcr.io/blakeblackshear/frigate:0.15.0-rc1-tensorrt
ghcr.io/blakeblackshear/frigate:0.15.0-rc1-tensorrt-jp4
ghcr.io/blakeblackshear/frigate:0.15.0-rc1-tensorrt-jp5
ghcr.io/blakeblackshear/frigate:0.15.0-rc1-rk
ghcr.io/blakeblackshear/frigate:0.15.0-rc1-rocm
ghcr.io/blakeblackshear/frigate:0.15.0-rc1-h8l
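To try the RC, point your container at one of the tags above. Here is a minimal docker-compose sketch; the service layout, volume paths, and shm_size are illustrative assumptions rather than values from this post:
```yaml
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:0.15.0-rc1  # or one of the variant tags listed above
    restart: unless-stopped
    # shm_size is an assumed example; increase it if the log warns it is too low (see Breaking Changes)
    shm_size: 256mb
    volumes:
      - ./config:/config                 # assumed host paths
      - ./media:/media/frigate
      - /etc/localtime:/etc/localtime:ro
```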
Changes since Beta 7
Major Changes for 0.15.0
Breaking Changes
There are several breaking changes in this release. Frigate will attempt to update the configuration automatically, but in some cases manual changes may be required. It is always recommended to back up your current config and the frigate.db database file before upgrading:
If the shm_size is too low, a warning will be printed in the log stating that it needs to be increased.
The record config has been refactored to allow for direct control of how long alerts and detections are retained. These values will be automatically populated from your current config, but you may want to adjust them after updating. See the updated docs here and ensure your updated config retains the footage you want it to.
Some users may need to change which hwaccel preset they are using (preset-vaapi may now need to be preset-intel-qsv-h264 or preset-intel-qsv-h265) if camera feeds are not functioning correctly after upgrading. This may need to be adjusted on a per-camera basis. If a qsv preset is not working properly, you may still need to use preset-vaapi or revert to the previous ffmpeg version by setting path: "5.0" in your ffmpeg: config entry. For an example, see the sketch below.
ffmpeg is no longer part of $PATH. In most cases this is handled automatically, but exec streams will need to use the full path for ffmpeg, which in most cases will be /usr/lib/ffmpeg/7.0/bin/ffmpeg. If you have bin: ffmpeg defined, it needs to be removed.
The model config under a detector has been simplified to just model_path as a string value. This change will be handled automatically through config migration.
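For example, these changes can look roughly like the following in a config file. This is a hedged sketch: the detector name, device, and model path are illustrative placeholders, and the retention days are just sample values.
```yaml
# Pin the previous bundled ffmpeg if the new presets misbehave
ffmpeg:
  path: "5.0"

# record retention is now controlled separately for alerts and detections
record:
  alerts:
    retain:
      days: 14
  detections:
    retain:
      days: 14

# a detector's model override is now a single model_path string
detectors:
  ov:                   # detector name is arbitrary
    type: openvino
    device: GPU
    model_path: /openvino-model/your_model.xml  # placeholder path
```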
Explore
The new Explore pane in Frigate 0.15 makes it easy to explore every object tracked by Frigate. It offers a variety of filters and supports keyword and phrase-based text search, searching for similar images, and searching through descriptive text generated by AI models.
The default Explore pane shows a summary of your most recent tracked objects organized by label. Clicking the small arrow icon at the end of the list will bring you to an infinitely scrolling grid view. The grid view can also be set as the default by changing the view type from the Settings button in the top right corner of the pane.
The Explore pane also serves as the new way to submit images to Frigate+. Filters can be applied to only display tracked objects with snapshots that have not been submitted to Frigate+. The left/right arrow keys on the keyboard allow quick navigation between tracked object snapshots when looking at the Tracked Object Details pane from the grid view.
AI/ML Search
Frigate 0.15 introduces two powerful search features: Semantic Search and GenAI Search. Semantic Search can be enabled on its own, while GenAI Search works in addition to Semantic Search.
Semantic Search
Semantic Search uses a CLIP model to generate embeddings (numerical representations of images) for the thumbnails of your tracked objects, enabling searches based on text descriptions or visual similarity. This is all done locally.
For instance, if Frigate detects and tracks a car, you can use similarity search to see other instances where Frigate detected and tracked that same car. You can also quickly search your tracked objects using an "image caption" approach. Searching for "red car driving on a residential street" or "person in a blue shirt walking on the sidewalk at dawn" or even "a person wearing a black t-shirt with the word 'SPORT' on it" will produce some stunning results.
Semantic Search works by running an AI model locally on your system. Small or underpowered systems like a Raspberry Pi will not run Semantic Search reliably, or at all. A dedicated GPU and 16GB of RAM are recommended for best performance.
See the Semantic Search docs for system requirements, setup instructions, and usage tips.
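As a sketch, enabling it is a small top-level block, mirroring the semantic_search section of the config posted earlier in this thread; treat the comments as guidance from the docs rather than guarantees:
```yaml
semantic_search:
  enabled: true
  # set reindex: true once to regenerate embeddings for existing tracked objects, then turn it back off
  reindex: false
  # the large model benefits from a GPU; a smaller model_size is lighter weight
  model_size: large
```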
Generative AI
GenAI Search employs generative AI models to create descriptive text for the thumbnails of your tracked objects, which are stored in the Frigate database to enhance future searches. Supported providers include Google Gemini, Ollama, and OpenAI, so you can choose whether you want to send data to the cloud or use a locally hosted provider.
See the GenAI docs for setup instructions and use case suggestions.
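As a sketch, a locally hosted Ollama provider looks like the genai block from the config posted earlier in this thread; the base_url and model are that user's values, not defaults:
```yaml
genai:
  enabled: true
  provider: ollama
  base_url: http://192.168.100.151:11434  # address of your own Ollama server
  model: llava:7b                          # any vision-capable model served by that instance
```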
Improved Tools for Debugging
Review Item Details Pane
A new Review Item Details pane can be viewed by clicking / tapping on the gray chip on a review item in the Review pane. This shows more information about the review item as well as thumbnails or snapshots for individual objects (if enabled). The pane also provides links to share the review item, download it, submit images to Frigate+, view object lifecycles, and more.
Object Lifecycle Pane
The Recordings Timeline from Frigate 0.13 has been improved upon and returns in 0.15 as the Object Lifecycle, viewable in the Review Item Details pane as well as the new Explore page. The new pane shows the significant moments during the object's lifecycle: when it was first seen, when it entered a zone, when it became stationary, etc. It also provides information about the object's area and size ratio to assist in configuring Frigate to tune out false positives.
Native Notifications
Frigate now supports notifications using the WebPush protocol. This allows Frigate to deliver notifications to devices that have registered in the Frigate settings, in a timely and secure manner. Currently, notifications are delivered for all review items marked as alerts. More options for native notifications will be supported in the future.
See the notifications docs.
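As a hedged sketch, the server side is enabled with a small config block; the email key here is an assumption based on the linked docs, used as contact information for the push service, and devices still have to register themselves from the Frigate settings page:
```yaml
notifications:
  enabled: true
  email: "admin@example.com"  # assumed key: contact address used when registering with the WebPush service
```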
New Object Detectors
ONNX
ONNX is an open model standard which allows for a single model format that can run on different types of GPUs. The default, tensorrt, and rocm Frigate build variants include GPU support for efficient object detection via ONNX models, simplifying configuration and supporting more models. There are no default ONNX models included for object detection.
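Since no ONNX model ships by default, you supply your own. A hedged sketch, modeled on the YOLO-NAS detector config posted earlier in this thread; the model file, label map, and input dimensions are placeholders for whatever model you export:
```yaml
detectors:
  onnx_0:
    type: onnx  # uses the GPU support in the default/tensorrt/rocm images

model:
  model_type: yolonas           # placeholder: match the model you export
  width: 320
  height: 320
  input_tensor: nchw
  input_pixel_format: bgr
  path: /config/model_cache/yolo_nas_s.onnx      # placeholder path to your ONNX model
  labelmap_path: /config/model_cache/labels.txt  # placeholder label map
```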
AMD MiGraphX
Support has been added for AMD GPUs via ROCm and MiGraphX. Currently there is no default included model for this detector.
Hailo-8
Support has been added for the Hailo-8 and Hailo-8L hardware for object detection on both arm64 and amd64 platforms.
Other UI Changes
Other Backend Changes
This discussion was created from the release 0.15.0 Release Candidate 1.