Virtual Video Device for Background Replacement with Deep Semantic Segmentation
(or The Project Formerly Known As DeepBackSub)
backscrub is licensed under the Apache License 2.0. See LICENSE file for details.
Install dependencies (`sudo apt install libopencv-dev build-essential v4l2loopback-dkms curl`).
Clone this repository with `git clone --recursive https://github.com/floe/backscrub.git`. To speed up the checkout you can additionally pass `--depth=1` to `git clone`. This is okay if you only want to download and build the code; however, for development it is not recommended.
Use `cmake` to build the project: create a subfolder (e.g. `build`), change to that folder and run:

```
cmake .. && make -j $(nproc || echo 4)
```
Deprecated: Another option to build everything is to run `make` in the root directory of the repository. While this will download and build all dependencies, it comes with a few drawbacks, like missing support for XNNPACK. Also, this might break with newer versions of Tensorflow Lite, as upstream support for this option has been removed. Use at your own risk.
First, load the v4l2loopback module (extra settings are needed to make Chrome work):

```
sudo modprobe v4l2loopback devices=1 max_buffers=2 exclusive_caps=1 card_label="VirtualCam" video_nr=10
```

Then, run backscrub (`-d -d` for full debug, `-c` for capture device, `-v` for virtual device, `-b` for wallpaper):

```
./backscrub -d -d -c /dev/video0 -v /dev/video10 -b ~/wallpapers/forest.jpg
```
Some cameras (like e.g. the Logitech Brio) need the video source switched to MJPG by passing `-f MJPG` in order for higher resolutions to become available for use.
For regular usage, set up a configuration file (e.g. `/etc/modprobe.d/v4l2loopback.conf`):

```
options v4l2loopback max_buffers=2
options v4l2loopback exclusive_caps=1
options v4l2loopback video_nr=10
options v4l2loopback card_label="VirtualCam"
```
To auto-load the driver on startup, create `/etc/modules-load.d/v4l2loopback.conf` with the following content:

```
v4l2loopback
```
Tested with the following dependencies:
Tested with the following software:
In these modern times where everyone is sitting at home and skype-ing/zoom-ing/webrtc-ing all the time, I was a bit annoyed about always showing my messy home office to the world. Skype has a "blur background" feature, but that starts to get boring after a while (and it's less private than I would personally like). Zoom has some background substitution thingy built-in, but I'm not touching that software with a bargepole (and that feature is not available on Linux anyway). So I decided to look into how to roll my own implementation without being dependent on any particular video conferencing software to support this.
This whole shebang involves three main steps with varying difficulty:

- find person in video (hard)
- replace background (easy)
- pipe data to virtual video device (medium)
I've been working a lot with depth cameras previously, also for background segmentation (see SurfaceStreams), so I just grabbed a leftover RealSense camera from the lab and gave it a shot. However, the depth data in a cluttered office environment is quite noisy, and no matter how I tweaked the camera settings, it could not produce any depth data for my hair...? I looked like a medieval monk who had the top of his head chopped off, so ... next.
See https://docs.opencv.org/3.4/d1/dc5/tutorial_background_subtraction.html for a tutorial. Should work OK for mostly static backgrounds and small moving objects, but it does not work for a mostly static person in front of a static background. Next.
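For illustration, a minimal sketch of that approach (not what backscrub ended up using), assuming the first camera and OpenCV's MOG2 subtractor:

```
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(0);                 // assumed: first camera
    auto subtractor = cv::createBackgroundSubtractorMOG2();
    cv::Mat frame, mask;
    while (cap.read(frame)) {
        // Pixels that differ from the learned background model become
        // foreground -- which is why a motionless person fades away.
        subtractor->apply(frame, mask);
        cv::imshow("foreground mask", mask);
        if (cv::waitKey(1) == 27) break;     // ESC to quit
    }
    return 0;
}
```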
See https://docs.opencv.org/3.4/db/d28/tutorial_cascade_classifier.html for a tutorial. Works okay-ish, but obviously only detects the face, and not the rest of the person. Also, it only roughly matches an ellipse, which looks rather weird in the end. Next.
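For reference, a hedged sketch of that detector (cascade file path is an assumption; the XML ships with OpenCV):

```
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    // Cascade file path is an assumption; adjust to your OpenCV install.
    cv::CascadeClassifier face_cascade(
        "/usr/share/opencv4/haarcascades/haarcascade_frontalface_default.xml");
    cv::VideoCapture cap(0);
    cv::Mat frame, gray;
    while (cap.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        std::vector<cv::Rect> faces;
        face_cascade.detectMultiScale(gray, faces);
        // Approximate each face with an ellipse -- exactly the "rough
        // ellipse" limitation mentioned above.
        for (const auto& f : faces) {
            cv::Point2f center(f.x + f.width / 2.0f, f.y + f.height / 2.0f);
            cv::ellipse(frame,
                        cv::RotatedRect(center,
                            cv::Size2f((float)f.width, f.height * 1.3f), 0.0f),
                        cv::Scalar(0, 255, 0), 2);
        }
        cv::imshow("faces", frame);
        if (cv::waitKey(1) == 27) break;     // ESC to quit
    }
    return 0;
}
```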
I've heard good things about this deep learning stuff, so let's try that. I first had to find my way through a pile of frameworks (Keras, Tensorflow, PyTorch, etc.), but after I found a ready-made model for semantic segmentation based on Tensorflow Lite (DeepLab v3+), I settled on that.
I had a look at the corresponding Python example, C++ example, and Android example, and based on those, I first cobbled together a Python demo. That was running at about 2.5 FPS, which is really excruciatingly slow, so I built a C++ version which manages 10 FPS without too much hand optimization. Good enough.
I've also tested a TFLite-converted version of the Body-Pix model, but the results haven't been much different to DeepLab for this use case.
More recently, Google has released a model specifically trained for person segmentation that's used in Google Meet. This has way better performance than DeepLab, both in terms of speed and of accuracy, so this is now the default. It needs one custom op from the MediaPipe framework, but that was quite easy to integrate. Thanks to @jiangjianping for pointing this out in the corresponding issue.
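For reference, the skeleton of such a TFLite inference loop looks roughly like this (a sketch, not backscrub's exact code; model path and tensor handling are assumptions that depend on the model used):

```
#include <memory>
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int main() {
    // Model path is an assumption.
    auto model = tflite::FlatBufferModel::BuildFromFile("segmentation.tflite");
    tflite::ops::builtin::BuiltinOpResolver resolver;
    // The Google Meet model additionally needs one custom op from
    // MediaPipe registered on this resolver (omitted here).
    std::unique_ptr<tflite::Interpreter> interpreter;
    tflite::InterpreterBuilder(*model, resolver)(&interpreter);
    interpreter->AllocateTensors();

    float* input = interpreter->typed_input_tensor<float>(0);
    // ... fill `input` with normalized RGB pixels at the model's input size ...
    interpreter->Invoke();
    float* scores = interpreter->typed_output_tensor<float>(0);
    // ... threshold the per-pixel scores into a binary person mask ...
    return 0;
}
```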
This is basically one line of code with OpenCV:
```
bg.copyTo(raw,mask);
```

Told you that's the easy part.
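For context, a sketch of how that one-liner is typically wrapped (all names here are illustrative, not backscrub's actual variables):

```
#include <opencv2/opencv.hpp>

// Hypothetical helper: `person_mask` is assumed to be an 8-bit mask that is
// 255 where the network found a person and 0 elsewhere.
void replace_background(cv::Mat& frame, const cv::Mat& wallpaper,
                        const cv::Mat& person_mask) {
    cv::Mat bg, mask;
    cv::resize(wallpaper, bg, frame.size());  // wallpaper must match the frame
    cv::bitwise_not(person_mask, mask);       // invert: background becomes 255
    bg.copyTo(frame, mask);                   // paint wallpaper over background
}
```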
I'm using v4l2loopback to pipe the data from my userspace tool into any software that can open a V4L2 device. This isn't too hard because of the nice examples, but there are some catches, most notably color space. It took quite some trial and error to find a common pixel format that's accepted by Firefox, Skype, and guvcview, and that is YUYV. Nicely enough, my webcam can output YUYV directly as raw data, so that does save me some colorspace conversions.
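A rough sketch of that output path (device path, frame size, and error handling are all simplified assumptions here):

```
#include <cstring>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>

// Hypothetical helper: configure the loopback device for raw YUYV output.
int open_virtual_cam(const char* path, int width, int height) {
    int fd = open(path, O_WRONLY);
    struct v4l2_format fmt;
    std::memset(&fmt, 0, sizeof(fmt));
    fmt.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
    fmt.fmt.pix.width = width;
    fmt.fmt.pix.height = height;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;  // the common-ground format
    fmt.fmt.pix.field = V4L2_FIELD_NONE;
    fmt.fmt.pix.bytesperline = width * 2;         // YUYV packs 2 bytes/pixel
    fmt.fmt.pix.sizeimage = width * height * 2;
    ioctl(fd, VIDIOC_S_FMT, &fmt);
    return fd;
}

// Per frame: write(fd, yuyv_pixels, width * height * 2);
```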
The dataflow through the whole program is roughly as follows:
- `write()` data to virtual video device
(*) these are required input parameters for this model
As usual: pull requests welcome.
Firefox preferred formats: https://searchfox.org/mozilla-central/source/third_party/libwebrtc/webrtc/modules/video_capture/linux/video_capture_linux.cc#142-159
We have been notified that some snap-packaged versions of `obs-studio` are unable to detect/use a virtual camera as provided by `backscrub`. Please check the details for workarounds if this applies to you.