Post processing video magic with ZoneMinder: find missing objects, blend multiple events, annotate videos, and more
zmMagik is a growing list of foo-magic things you can do with the video images that ZoneMinder (ZM) stores. Probably...
I came home one day to find my trash can cover had gone missing. I thought it would be fun to write a tool that could search through my events and tell me when it went missing. Yep, it started with trash talking.
Andy posted an example of how other vendors blend multiple videos to give a common view quickly. I thought it would be fun to try that too.
One thing leads to another, and I keep doing new things to learn new things...
As of today, it lets you:

* blend multiple events into a single composite video
* find when an object went missing (or first appeared)
* annotate videos with the objects detected in them
This video is blended from 2 days' worth of video. Generated using:

```
python3 ./magik.py -c config.ini --monitors 11 --blend --display --download=False --from "2 days ago"
```
Generated using:

```
python3 ./magik.py -c config.ini --eventid 44063 --dumpjson --annotate --display --download=False --onlyrelevant=False --skipframes=1
```
Generated using:

```
python3 ./magik.py -c config.ini --find trash.jpg --dumpjson --display --download=False --from "8am" --to "3pm" --monitors 11
```
```
# needs python3, so you may need to use pip3 if you have 2.x as well
git clone https://github.com/pliablepixels/zmMagik
cd zmMagik
# you may need to do sudo -H pip3 instead for below, if you get permission errors
pip3 install -r requirements.txt
```
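If you prefer isolated dependencies, a virtualenv works too (optional; a suggestion, not a project requirement):

```
# optional: keep zmMagik's Python dependencies out of the system site-packages
python3 -m venv venv
source venv/bin/activate
pip3 install -r requirements.txt
```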
Note that this package also needs OpenCV, which is not installed by the step above. This is because you may have a GPU and may want to build OpenCV with GPU support. If not, installing it via pip is fine. See this page on how to install OpenCV.
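However you install OpenCV, a quick sanity check that Python can actually see it:

```
# prints the OpenCV version the Python bindings are using
python3 -c "import cv2; print(cv2.__version__)"
```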
If you are using yolo extraction, you also need these files; make sure your config variables point to them:

```
wget https://raw.githubusercontent.com/pjreddie/darknet/master/cfg/yolov3.cfg
wget https://pjreddie.com/media/files/yolov3.weights
wget https://raw.githubusercontent.com/pjreddie/darknet/master/data/coco.names
```
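After downloading, verify your config actually points at these files. The key names below are my guesses, so check the sample config.ini for the real ones:

```
# key names are guesses -- check the sample config.ini for what zmMagik actually expects
grep -E "config_file|weights_file|labels_file" config.ini
```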
General note: do a `python3 ./magik.py -h` to see all options. Remember, you can put regularly used options in a config file and override them on the CLI as needed. Much easier.
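For instance, a value in config.ini can be overridden per run:

```
# config.ini might set monitors=11; the CLI flag wins for this run
python3 ./magik.py -c config.ini --monitors 7 --blend --from "yesterday"
```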
```
python3 ./magik.py --monitors=11 --from "yesterday, 7am" --to "today, 10am" --blend -c config.ini
```
```
python3 ./magik.py --monitors=7 --present=False --from "today, 7am" --to "today, 7pm" --find "amazonpackage.jpg" -c config.ini
```
Note that `amazonpackage.jpg` needs to be the same dimensions/orientation as in the events it will look into. The best way to do this is to load up ZM, find the package, crop it, and use that.
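If you'd rather crop on the command line than in an image editor, ImageMagick can do it; the geometry below is a placeholder (and remember: crop only, don't resize):

```
# cut a 120x80 box at offset (300,400) from a full ZM frame -- placeholder numbers
convert frame.jpg -crop 120x80+300+400 +repage amazonpackage.jpg
```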
See the GPU section below.
This is the default mode. It uses the very fast OpenCV background subtraction to detect motion, and then uses YOLO to refine the search and see if it really is an object worth marking. Use this mode by default, unless you need more speed, in which case use "background_extraction".
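The mode is selectable; the option name below (`detection_type`) is my guess, so confirm the real name with `python3 ./magik.py -h`:

```
# hypothetical option name -- check `python3 ./magik.py -h` for the actual one
python3 ./magik.py -c config.ini --blend --monitors 11 --detection_type=background_extraction
```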
Yes, that's why you should use "mixed". Some tips:

* Use masks to restrict the area
* Use `--display` to see what is going on; look at the frame masks and output
* Try changing the learning rate of the background extractor
* See if using a different background extractor for `fgbg` in `globals.py` helps you (read this)
* Fiddle with `kernelclean` and `kernelfill` in `globals.py`
Yes. Unlike "background_extraction", YOLO doesn't report a mask of the object shape, only a bounding box. I'll eventually add Mask R-CNN too, so you can try that (it will be slower than YOLO). Maybe you can suggest a smarter way to overlay the rectangle using some fancy operators that will make it act like it's blending?
`find` doesn't find my image

Congratulations, maybe no one stole your Amazon package.

* Make sure the image you are looking for is not rotated/resized/etc.; it needs to be the original dimensions
Only the DNN object detection part (YOLO). magik uses various image functions like background extraction, merging, resizing, etc. that are not done on the GPU. As of today, the OpenCV Python CUDA wrapper documentation is dismal, and unlike what many people think, it isn't just a `.cuda` difference in API calls; the flow/parameters also differ. See this for example. I may eventually get around to wrapping all the other non-DNN parts in their CUDA equivalents, but right now I'm disinclined to do it. I'll be happy to accept PRs.
As of Feb 2020, OpenCV 4.2 has been released, which supports CUDA for DNN. You have two options:
(RECOMMENDED) Compile and install OpenCV 4.2+ and enable CUDA support. See this guide on how to do that.
(LEGACY) Or, compile darknet directly with GPU support. If you go this route, I'd suggest building AlexeyAB's darknet fork, as it is better maintained.
You'd only want to compile darknet directly if your GPU/CUDA version is not compatible with OpenCV 4.2. In all other cases, go with OpenCV 4.2 (don't install it from a pip package; GPU support is not enabled there).
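Whichever route you pick, verify the OpenCV build actually sees your GPU before blaming magik; this uses OpenCV's own CUDA device query:

```
# prints the OpenCV version and the number of usable CUDA devices (0 means no GPU support in this build)
python3 -c "import cv2; print(cv2.__version__); print(cv2.cuda.getCudaEnabledDeviceCount())"
```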
Simply put:

* Either compile OpenCV 4.2+ from source correctly, or go the direct darknet route as described before
* Make sure it is actually using the GPU
* Then set `gpu=True`, and either set `use_opencv_dnn_cuda` to `True` (OpenCV route) or set `darknet_lib=` (darknet route); see the sketch after this list
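For example, a sketch of the OpenCV DNN CUDA route; `--gpu=True` is used in the benchmark later in this README, but passing `use_opencv_dnn_cuda` on the CLI (rather than in config.ini) is my assumption:

```
# --gpu=True appears in the benchmark below; --use_opencv_dnn_cuda as a CLI flag is an assumption
python3 ./magik.py -c config.ini --blend --monitors 11 --gpu=True --use_opencv_dnn_cuda=True
```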
The YoloV3 model config I use takes up 1.6GB of GPU memory. Note that I use a reduced-footprint YOLO config. I have 4GB of GPU memory, so the default `yolov3.cfg` did not work and ate up all my memory. This is my modified `yolov3.cfg` section to make it work:

```
[net]
batch=1
subdivisions=1
width=416
height=416
```
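To confirm a config fits your card, watch GPU memory while magik runs; `nvidia-smi` ships with the NVIDIA driver:

```
# refreshes GPU memory/utilization stats every second; run magik in another shell
watch -n 1 nvidia-smi
```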
Here is a practical comparison. I ran a blend operation on my driveway camera (modect) for a full day's worth of alarmed events. I used 'mixed' mode, which first runs OpenCV background subtraction and then YOLO if the first stage finds anything; this was to keep the comparison fair to the CPU. It grabbed a total of 27 video events:

```
python3 ./magik.py --blend --from "1 day ago" --monitors 8 -c ./config.ini --gpu=True --alarmonly=True --skipframes=1

Total time: 250.72s
```
I then ran it without the GPU (note that I have `libopenblas-dev liblapack-dev libblas-dev` configured with OpenCV, which improves CPU performance a lot):

```
python3 ./magik.py --blend --from "1 day ago" --monitors 8 -c ./config.ini --gpu=False --alarmonly=True --skipframes=1

Total time: 1234.77s
```
That's roughly a 5x improvement (1234.77s vs. 250.72s).
To speed things up (a combined example follows this list):

* Use `resize` to work on smaller frames
* Use `skipframes` to process fewer frames
* Use `alarmonly=True` so only alarmed events are processed
* Set ZoneMinder's `EVENT_CLOSE_MODE` to "alarm". That will create a new event when an alarm occurs, and close it when the alarm closes. That will help you speed things up a lot.
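Putting a few of these together; `--skipframes` and `--alarmonly` appear in the benchmark above, while `--resize=0.5` as a scale factor is my assumption (check `python3 ./magik.py -h`):

```
# skipframes/alarmonly are flags used earlier in this README; resize=0.5 as a scale factor is an assumption
python3 ./magik.py -c config.ini --blend --monitors 11 --from "1 day ago" --alarmonly=True --skipframes=2 --resize=0.5
```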