Virtual Video Device for Background Replacement with Deep Semantic Segmentation
In these modern times where everyone is sitting at home and skype-ing/zoom-ing/webrtc-ing all the time, I was a bit annoyed about always showing my messy home office to the world. Skype has a "blur background" feature, but that starts to get boring after a while (and it's less private than I would personally like). Zoom has some background substitution thingy built-in, but I'm not touching that software with a bargepole (and that feature is not available on Linux anyway). So I decided to look into how to roll my own implementation without being dependent on any particular video conferencing software to support this.
This whole shebang involves three main steps with varying difficulty:
- find person in video (hard)
- replace background (easy)
- pipe data to virtual video device (medium)
I've been working a lot with depth cameras previously, also for background segmentation (see SurfaceStreams), so I just grabbed a leftover RealSense camera from the lab and gave it a shot. However, the depth data in a cluttered office environment is quite noisy, and no matter how I tweaked the camera settings, it could not produce any depth data for my hair...? I looked like a medieval monk who had the top of his head chopped off, so ... next.
Next idea: plain OpenCV background subtraction (see https://docs.opencv.org/3.4/d1/dc5/tutorial_background_subtraction.html for the tutorial). Should work OK for mostly static backgrounds and small moving objects, but does not work for a mostly static person in front of a static background. Next.
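For reference, the gist of that approach is only a few lines (a minimal sketch of the tutorial's method, not code from this project):

```cpp
// Minimal sketch of the MOG2 background subtractor from the OpenCV tutorial.
// Fine for small moving objects, but a person sitting still slowly fades into
// the learned background model - which is exactly the problem here.
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(0);
    cv::Ptr<cv::BackgroundSubtractorMOG2> subtractor = cv::createBackgroundSubtractorMOG2();
    cv::Mat frame, mask;
    while (cap.read(frame)) {
        subtractor->apply(frame, mask);   // mask: 255 = foreground, 127 = shadow, 0 = background
        cv::imshow("foreground mask", mask);
        if (cv::waitKey(1) == 27) break;  // ESC quits
    }
    return 0;
}
```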
Then I tried face detection with a Haar cascade (see https://docs.opencv.org/3.4/db/d28/tutorial_cascade_classifier.html for the tutorial). Works okay-ish, but obviously only detects the face, and not the rest of the person. Also, it only roughly matches an ellipse, which is looking rather weird in the end. Next.
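Again just for reference, a minimal sketch of that attempt (the cascade XML ships with OpenCV, the exact path depends on your install):

```cpp
// Minimal sketch of the Haar cascade attempt: only the face is found, and the
// rough ellipse around it is what ends up looking rather weird as a "person" mask.
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    // cascade file ships with OpenCV; path depends on your installation
    cv::CascadeClassifier face_cascade("haarcascade_frontalface_default.xml");
    cv::VideoCapture cap(0);
    cv::Mat frame, gray;
    while (cap.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        std::vector<cv::Rect> faces;
        face_cascade.detectMultiScale(gray, faces);
        cv::Mat mask = cv::Mat::zeros(frame.size(), CV_8UC1);
        for (const cv::Rect& f : faces)   // rough ellipse around each detected face
            cv::ellipse(mask, cv::Point(f.x + f.width / 2, f.y + f.height / 2),
                        cv::Size(f.width / 2, f.height / 2), 0, 0, 360,
                        cv::Scalar(255), cv::FILLED);
        cv::imshow("face mask", mask);
        if (cv::waitKey(1) == 27) break;
    }
    return 0;
}
```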
I've heard good things about this deep learning stuff, so let's try that. I first had to find my way through a pile of frameworks (Keras, Tensorflow, PyTorch, etc.), but after I found a ready-made model for semantic segmentation based on Tensorflow Lite (DeepLab v3+), I settled on that.
I had a look at the corresponding Python example, C++ example, and Android example, and based on those, I first cobbled together a Python demo. That was running at about 2.5 FPS, which is really excruciatingly slow, so I built a C++ version which manages 10 FPS without too much hand optimization. Good enough.
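Stripped of all the pre/post-processing details, the C++ side follows the standard TFLite invocation pattern. The sketch below is only illustrative: the model file name, the [0,1] input scaling and the single-channel probability output are assumptions that depend on the actual model you feed it:

```cpp
// Rough sketch of a TFLite segmentation loop in C++. "segmentation.tflite" and
// the tensor layouts are placeholders; the real model dictates input size,
// normalization and how the output maps to a person mask.
#include <cstring>
#include <memory>
#include <opencv2/opencv.hpp>
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int main() {
    auto model = tflite::FlatBufferModel::BuildFromFile("segmentation.tflite");
    tflite::ops::builtin::BuiltinOpResolver resolver;
    std::unique_ptr<tflite::Interpreter> interpreter;
    tflite::InterpreterBuilder(*model, resolver)(&interpreter);
    interpreter->AllocateTensors();

    // assume a float32 input tensor of shape [1, H, W, 3]
    TfLiteTensor* input = interpreter->input_tensor(0);
    int h = input->dims->data[1], w = input->dims->data[2];

    cv::VideoCapture cap(0);
    cv::Mat frame, small, in_rgb;
    while (cap.read(frame)) {
        cv::resize(frame, small, cv::Size(w, h));
        cv::cvtColor(small, small, cv::COLOR_BGR2RGB);
        small.convertTo(in_rgb, CV_32FC3, 1.0 / 255.0);   // scale to [0,1]
        std::memcpy(input->data.f, in_rgb.ptr<float>(0), in_rgb.total() * in_rgb.elemSize());

        interpreter->Invoke();

        // assume a float32 output of shape [1, H, W, 1] with per-pixel "person" probability
        const TfLiteTensor* output = interpreter->output_tensor(0);
        cv::Mat prob(h, w, CV_32FC1, output->data.f);
        cv::Mat mask;
        cv::threshold(prob, mask, 0.5, 255, cv::THRESH_BINARY);
        mask.convertTo(mask, CV_8UC1);
        cv::resize(mask, mask, frame.size());

        cv::imshow("person mask", mask);
        if (cv::waitKey(1) == 27) break;
    }
    return 0;
}
```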
I've also tested a TFLite-converted version of the Body-Pix model, but the results weren't much different from DeepLab for this use case.
More recently, Google has released a model specifically trained for person segmentation that's used in Google Meet. This has way better performance than DeepLab, both in terms of speed and accuracy, so this is now the default. It needs one custom op from the MediaPipe framework, but that was quite easy to integrate. Thanks to @jiangjianping for pointing this out in the corresponding issue.
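For reference, hooking a custom op into TFLite just means adding it to the op resolver before building the interpreter; roughly like this (the header path and namespace depend on the MediaPipe version, so treat this as a sketch rather than the exact code):

```cpp
// Sketch: register the one custom op the Meet model needs before building the
// interpreter. The op name is "Convolution2DTransposeBias"; the registration
// function and header come from MediaPipe and may differ between versions.
#include <memory>
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"
#include "mediapipe/util/tflite/operations/transpose_conv_bias.h"

std::unique_ptr<tflite::Interpreter> build_interpreter(const tflite::FlatBufferModel& model) {
    tflite::ops::builtin::BuiltinOpResolver resolver;
    resolver.AddCustom("Convolution2DTransposeBias",
                       mediapipe::tflite_operations::RegisterConvolution2DTransposeBias());
    std::unique_ptr<tflite::Interpreter> interpreter;
    tflite::InterpreterBuilder(model, resolver)(&interpreter);
    return interpreter;
}
```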
Replacing the background is basically one line of code with OpenCV:
```
bg.copyTo(raw,mask);
```

Told you that's the easy part.
I'm using v4l2loopback to pipe the data from my userspace tool into any software that can open a V4L2 device. This isn't too hard because of the nice examples, but there are some catches, most notably color space. It took quite some trial and error to find a common pixel format that's accepted by Firefox, Skype, and guvcview, and that is YUYV. Nicely enough, my webcam can output YUYV directly as raw data, so that does save me some colorspace conversions.
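The output side itself is plain V4L2: open the loopback device, set the YUYV format once via VIDIOC_S_FMT, then write() one complete frame at a time. A trimmed-down sketch (device path, resolution and error handling kept minimal for illustration):

```cpp
// Minimal sketch of feeding YUYV frames to a v4l2loopback device.
// Device path and resolution are hardcoded here purely for illustration.
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include <cstring>
#include <vector>

int open_virtual_cam(const char* path, int width, int height) {
    int fd = open(path, O_WRONLY);
    if (fd < 0) return -1;

    struct v4l2_format fmt;
    std::memset(&fmt, 0, sizeof(fmt));
    fmt.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
    fmt.fmt.pix.width = width;
    fmt.fmt.pix.height = height;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;  // the format Firefox/Skype/guvcview all accept
    fmt.fmt.pix.field = V4L2_FIELD_NONE;
    fmt.fmt.pix.bytesperline = width * 2;         // YUYV packs 2 bytes per pixel
    fmt.fmt.pix.sizeimage = width * height * 2;
    if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0) { close(fd); return -1; }
    return fd;
}

int main() {
    int fd = open_virtual_cam("/dev/video1", 640, 480);
    if (fd < 0) return 1;
    std::vector<unsigned char> frame(640 * 480 * 2, 128);  // one grey-ish YUYV dummy frame
    while (true) {
        write(fd, frame.data(), frame.size());             // one complete frame per write()
        usleep(33000);                                     // ~30 FPS
    }
    return 0;
}
```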
The dataflow through the whole program is roughly as follows:
- grab raw YUYV image from the webcam
- convert/resize it to the input the segmentation model expects (*)
- run the Tensorflow Lite model to get a person mask
- replace the background in the original frame using that mask
- write() data to virtual video device

(*) these are required input parameters for this model
Tested with the following dependencies:
Tested with the following software:
- Install dependencies (`sudo apt install libopencv-dev build-essential v4l2loopback-dkms curl`).
- Run `make` to build everything (should also clone and build Tensorflow Lite).
If the first part doesn't work:
- Clone the https://github.com/tensorflow/tensorflow/ repo into the `tensorflow/` folder
- Check out tag v2.4.0
- Run `./tensorflow/lite/tools/make/download_dependencies.sh`
- Run `./tensorflow/lite/tools/make/build_lib.sh`
First, load the v4l2loopback module (extra settings needed to make Chrome work):
```
sudo modprobe v4l2loopback devices=1 max_buffers=2 exclusive_caps=1 card_label="VirtualCam"
```

Then, run deepseg (`-d -d` for full debug, `-c` for capture device, `-v` for virtual device):

```
./deepseg -d -d -c /dev/video0 -v /dev/video1
```
As usual: pull requests welcome.
- The project name isn't catchy enough. Help me find a nice backronym.
- Resolution is currently hardcoded to 640x480 (lowest common denominator).
- Only works with Linux, because that's what I use.
- Needs a webcam that can produce raw YUYV data (but extending to the common YUV420 format should be trivial).
Firefox preferred formats: https://dxr.mozilla.org/mozilla-central/source/media/webrtc/trunk/webrtc/modules/video_capture/linux/video_capture_linux.cc#142-159