This results in an MJPEG stream with identified objects that has lower latency than viewing the RTSP feed directly in VLC.
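The MJPEG stream is a multipart HTTP response in which each part is a complete JPEG image. As a rough illustration (a hypothetical helper, not part of frigate), frames can be split out of the raw byte stream by their JPEG start/end-of-image markers:

```python
def extract_jpeg_frames(buffer: bytes) -> list:
    """Split raw multipart MJPEG bytes into individual JPEG frames.

    Scans for the JPEG start-of-image (FF D8) and end-of-image (FF D9)
    markers and returns each complete frame as its own bytes object.
    """
    frames = []
    start = 0
    while True:
        soi = buffer.find(b'\xff\xd8', start)  # start-of-image marker
        if soi == -1:
            break
        eoi = buffer.find(b'\xff\xd9', soi)    # end-of-image marker
        if eoi == -1:
            break  # incomplete trailing frame; wait for more data
        frames.append(buffer[soi:eoi + 2])
        start = eoi + 2
    return frames
```

A real client would read the response incrementally and keep a rolling buffer, but the marker scan above is the core of pulling frames out of the stream.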
Build the container with:

```shell
docker build -t frigate .
```
Download a model from the zoo.
Download the corresponding label map from here.
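The label map is a `.pbtxt` file made up of `item` blocks, each with an `id` and a name. As a hedged sketch (frigate itself may load it via the TensorFlow object detection utilities instead), a minimal parser could look like:

```python
import re


def parse_label_map(pbtxt: str) -> dict:
    """Minimal label_map.pbtxt parser: maps class id -> human-readable name.

    Prefers display_name when present, falling back to name.
    Assumes flat item { ... } blocks with no nested braces.
    """
    labels = {}
    for item in re.finditer(r'item\s*\{([^}]*)\}', pbtxt):
        body = item.group(1)
        id_match = re.search(r'id:\s*(\d+)', body)
        name_match = (re.search(r'display_name:\s*[\'"]([^\'"]*)[\'"]', body)
                      or re.search(r'\bname:\s*[\'"]([^\'"]*)[\'"]', body))
        if id_match and name_match:
            labels[int(id_match.group(1))] = name_match.group(1)
    return labels
```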
Run the container with:

```shell
docker run --rm \
-v <path_to_frozen_detection_graph.pb>:/frozen_inference_graph.pb:ro \
-v <path_to_labelmap.pbtext>:/label_map.pbtext:ro \
-p 5000:5000 \
-e RTSP_URL='<rtsp_url>' \
-e REGIONS='<box_size_1>,<x_offset_1>,<y_offset_1>,<min_person_size_1>,<min_motion_size_1>,<mask_file_1>:<box_size_2>,<x_offset_2>,<y_offset_2>,<min_person_size_2>,<min_motion_size_2>,<mask_file_2>' \
-e MQTT_HOST='your.mqtthost.com' \
-e MQTT_TOPIC_PREFIX='cameras/1' \
-e DEBUG='0' \
frigate:latest
```
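Each region in `REGIONS` is a comma-separated tuple of six fields, and multiple regions are separated by `:`. A sketch of parsing that format (the dictionary keys below are illustrative, not frigate's internal names):

```python
def parse_regions(regions_str: str) -> list:
    """Parse the REGIONS env var format documented above.

    Regions are ':'-separated; within a region the fields are
    box_size, x_offset, y_offset, min_person_size, min_motion_size, mask_file.
    """
    regions = []
    for region in regions_str.split(':'):
        size, x, y, person, motion, mask = region.split(',')
        regions.append({
            'box_size': int(size),
            'x_offset': int(x),
            'y_offset': int(y),
            'min_person_size': int(person),
            'min_motion_size': int(motion),
            'mask_file': mask,
        })
    return regions
```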
Example compose:

```yaml
frigate:
  container_name: frigate
  restart: unless-stopped
  image: frigate:latest
  volumes:
    - <path_to_frozen_detection_graph.pb>:/frozen_inference_graph.pb:ro
    - <path_to_labelmap.pbtext>:/label_map.pbtext:ro
    - <path_to_config>:/config
  ports:
    - "127.0.0.1:5000:5000"
  environment:
    RTSP_URL: "<rtsp_url>"
    REGIONS: "<box_size_1>,<x_offset_1>,<y_offset_1>,<min_person_size_1>,<min_motion_size_1>,<mask_file_1>:<box_size_2>,<x_offset_2>,<y_offset_2>,<min_person_size_2>,<min_motion_size_2>,<mask_file_2>"
    MQTT_HOST: "your.mqtthost.com"
    MQTT_TOPIC_PREFIX: "cameras/1"
    DEBUG: "0"
```
Access the MJPEG stream at http://localhost:5000
To build an optimized version of the frozen graph, use a TensorFlow development container (see https://www.tensorflow.org/install/source#docker_linux_builds). This used `tensorflow/tensorflow:1.12.0-devel-py3`:

```shell
docker run -it -v ${PWD}:/lab -v ${PWD}/../back_camera_model/models/ssd_mobilenet_v2_coco_2018_03_29/frozen_inference_graph.pb:/frozen_inference_graph.pb:ro tensorflow/tensorflow:1.12.0-devel-py3 bash
```
Inside the container, build and run the graph transform tool:

```shell
bazel build tensorflow/tools/graph_transforms:transform_graph
bazel-bin/tensorflow/tools/graph_transforms/transform_graph \
--in_graph=/frozen_inference_graph.pb \
--out_graph=/lab/optimized_inception_graph.pb \
--inputs='image_tensor' \
--outputs='num_detections,detection_scores,detection_boxes,detection_classes' \
--transforms='
strip_unused_nodes(type=float, shape="1,300,300,3")
remove_nodes(op=Identity, op=CheckNumerics)
fold_constants(ignore_errors=true)
fold_batch_norms
fold_old_batch_norms'
```