A highly extensible software stack to empower everyone to build practical real-world live video analytics applications for object detection and counting/alerting with cutting-edge machine learning algorithms. The repository features a hybrid edge-cloud video analytics pipeline (built on C# .NET Core), which allows TensorFlow DNN model plug-in, GPU/FPGA acceleration, Docker containerization/Kubernetes orchestration, and interactive querying for after-the-fact analysis. A brief summary of the Rocket platform can be found in :memo:Rocket-features-and-pipelines.pdf.
Feel free to check out our :memo:webinar on Rocket from Dec 2019.
Microsoft Visual Studio (VS 2017 is preferred) is the recommended IDE for Rocket on Windows 10. While installing Visual Studio, please also add the C++ 2015.3 v14.00 (v140) toolset to your local machine. The snapshot below shows how to include C++ 2015.3 v14.00 from the Visual Studio Installer.
Follow the instructions to install .NET Core 2.2 (2.2.102 is preferred).
CUDA 8.0 (e.g., cuda8.0.61win10_network.exe) is needed for Darknet (e.g., YOLO) models.
After installation, please make sure the files in `C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\extras\visual_studio_integration\MSBuildExtensions` are copied to `C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\Common7\IDE\VC\VCTargets\BuildCustomizations`.
CUDA 9.1 (e.g., cuda9.1.85win10_network.exe) is needed to support TensorFlow models.
cuDNN v7 is preferred (e.g., cudnn-8.0-windows10-x64-v184.108.40.206.zip).
Add the following paths to the `Path` variable in Environment Variables:
`C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\bin`
`C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\include`
`C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\lib\x64`
Also add `CUDA_PATH` with Variable Value `C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0` into Environment Variables.
Restart your computer after installing CUDA and cuDNN.
Docker is recommended for running Rocket on Linux. Below we use Ubuntu 16.04 as an example to walk through the steps of building the Rocket Docker image and running it with GPU acceleration.
Check out the repository.
Rocket takes as input either live video streams (e.g., `rtsp://:/`) or local video files (which should be placed in `\media\`). A sample video file `sample.mp4` is already included in the folder.
Set up the lines (in `\cfg\`) used in line-based counting/alerting and cascaded DNN calls; each line in the file defines one line-of-interest. A sample line configuration file `sample.txt`, manually created based on `sample.mp4`, is also included in the folder.
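Line-based counting works by checking, for each tracked object, whether its movement between frames crosses a configured line-of-interest. A minimal sketch of that geometric check (illustrative only, not Rocket's actual implementation; all names are hypothetical):

```python
def _ccw(a, b, c):
    # True if points a, b, c make a counter-clockwise turn.
    return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])

def crosses(p_prev, p_curr, line_start, line_end):
    """True if the object's displacement segment p_prev->p_curr
    intersects the line-of-interest (standard segment-intersection test)."""
    return (_ccw(p_prev, line_start, line_end) != _ccw(p_curr, line_start, line_end)
            and _ccw(p_prev, p_curr, line_start) != _ccw(p_prev, p_curr, line_end))

# Example: an object moving left-to-right across a vertical line at x = 5.
print(crosses((3, 3), (7, 3), (5, 0), (5, 10)))  # → True
print(crosses((3, 3), (4, 3), (5, 0), (5, 10)))  # → False
```

A counter (or alert) would then be triggered each time `crosses` returns True for an object's consecutive positions.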
Run `Config.bat` before the first time you run Rocket to download pre-compiled OpenCV and TensorFlow binaries as well as Darknet YOLO weights files. It may take a few minutes depending on your network status. Proceed only when all downloads have finished. YOLOv3 and Tiny YOLOv3 are already included in Rocket; you can plug in other YOLO models as you wish.
Open the solution in `src\VAP\` from Visual Studio.
Set the pipeline config `PplConfig` in VideoPipelineCore - App.config. Six configurations are pre-defined in the code. Pipeline descriptions are also included in :memo:Rocket-features-and-pipelines.pdf.
(Optional) Set up your own database and Azure Machine Learning service if `PplConfig` is set to 4 or 5. Enter the connection settings in `App.Config`; Rocket will handle the communication between local modules and the cloud service.
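For reference, `PplConfig` is an `appSettings` key in the standard .NET `App.config` format; a minimal sketch, assuming the six pre-defined pipelines are numbered 0-5 (the value shown is illustrative):

```xml
<configuration>
  <appSettings>
    <!-- Selects which of the six pre-defined pipelines to run -->
    <add key="PplConfig" value="3" />
  </appSettings>
</configuration>
```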
Build the solution.
Run the code.
To run Rocket on the sample video, for example, arguments can be set to `sample.mp4 sample.txt 1 1 car`. You can also run Rocket from the command line in the build output directory `\src\VAP\VideoPipelineCore\bin\Debug\netcoreapp2.2`. For instance: `dotnet .\VideoPipelineCore.dll sample.mp4 sample.txt 1 1 car`.
We have pre-built a Rocket Docker image from the docker branch with local processing only (slide #12 without the cloud parts). The image is hosted on Docker Hub, a public library and community for container images; you will be asked to log in before pulling/pushing images (sign up first if you don't have an account).
To test the pre-built Rocket image, run
docker pull ycshu086/rocket-sample-edgeonly:0.1
Once pulled, run the command below to start Rocket with NVIDIA GPU.
docker run --runtime=nvidia -v :/app/output ycshu086/rocket-sample-edgeonly:0.1 sample.mp4 sample.txt 1 1 car
Build your own Rocket pipeline on Linux
Pull the base image: `docker pull ycshu086/ubuntu-dotnetcore-opencv-opencvsharp-cuda-cudnn:.`
Place your line configuration file into `\cfg`. If you are running Rocket on a pre-recorded video, please also copy the video file into `\media\`.
Modify `\src\VAP\VideoPipelineCore\App.Config` to set proper parameters for the database and Azure Machine Learning service connection.
Run `sudo chmod 744 Config.sh` and then `sudo ./Config.sh` before the first time you build the Rocket image to download pre-compiled TensorFlow binaries.
Run `docker build` to build the Rocket image:
`docker build -t /: -f Dockerfile.VAP .`
Push the image with `docker push /:` (note that `docker push` takes no `-t` flag).
Run Rocket image on Linux
Run `docker images` to check existing images.
Pull the image with `docker pull /:` (note that `docker pull` takes no `-t` flag).
docker run --runtime=nvidia -v :/app/output /: sample.mp4 sample.txt 1 1 car
Output images are generated in folders under `\src\VAP\VideoPipelineCore\bin\` (Windows), or in the local directory you mount during `docker run` (Linux). Results from different modules are sent to different directories (e.g., `output_bgsline` for the background subtraction-based detector), whereas `output_all` has images from all modules. The name of each file consists of the frame ID, module name, and confidence score. Below are a few sample results from running pipeline 3 and pipeline 5 on `sample.mp4`. You should also see results printed to the console while Rocket is running. The above illustration of pipeline 3 shows that at frame 2679, background subtraction detected an object, the tiny YOLO DNN confirmed it was a car with a confidence of 0.24, and the heavy YOLOv3 confirmed it with a confidence of 0.92. Likewise for pipeline 5, the TensorFlow FastRNN model had a confidence of 0.55 and AzureML (in the cloud) came back with a confidence of 0.76 for the same object.
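The cascade behavior described above (a cheap detector escalating to heavier DNNs only when it is not confident) can be sketched as follows; the detector stand-ins and the 0.3 threshold are illustrative assumptions, not Rocket's actual code or values:

```python
def cascaded_detect(frame, detectors, confirm_threshold=0.3):
    """Run detectors from cheapest to heaviest; stop once one is confident.

    Each entry in `detectors` is (name, fn) where fn(frame) returns
    (label, confidence) or None. Returns (label, confidence, name)
    from the last detector invoked, or None if nothing was detected.
    """
    result = None
    for name, detect in detectors:
        hit = detect(frame)
        if hit is None:
            return None          # nothing detected -> no need to escalate
        label, conf = hit
        result = (label, conf, name)
        if conf >= confirm_threshold:
            break                # confident enough -> skip heavier models
    return result

# Illustrative stand-ins for background subtraction and the two YOLO models,
# using the confidences from the pipeline-3 example above.
pipeline = [
    ("bgs",       lambda f: ("object", 0.10)),
    ("tiny_yolo", lambda f: ("car", 0.24)),
    ("yolo_v3",   lambda f: ("car", 0.92)),
]
print(cascaded_detect("frame2679", pipeline))  # → ('car', 0.92, 'yolo_v3')
```

The design point is that most frames are resolved by the cheap detectors, so the expensive DNN (or the cloud call in pipeline 5) only runs on the small fraction of ambiguous detections.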