Neural Point-Based Graphics
Kara-Ali Aliev¹
Artem Sevastopolsky¹²
Maria Kolos¹²
Dmitry Ulyanov³
Victor Lempitsky¹²
¹Samsung AI Center  ²Skolkovo Institute of Science and Technology  ³in3d.io
UPD (09.02.2021): added a Docker container which can be executed on a headless node. See Readme.
This is a PyTorch implementation of Neural Point-Based Graphics (NPBG), a new method for real-time photo-realistic rendering of real scenes. NPBG uses a raw point cloud as the geometric representation of a scene, and augments each point with a learnable neural descriptor that encodes local geometry and appearance. A deep rendering network is learned in parallel with the descriptors, so that new views of the scene can be obtained by passing the rasterizations of a point cloud from new viewpoints through this network.
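As a rough illustration of this idea, here is a minimal sketch (hypothetical names, not the repository's actual API; the real implementation uses a U-Net renderer and multi-scale OpenGL rasterization): each point carries a learnable descriptor that is splatted into an image-sized tensor and decoded to RGB by a convolutional network trained jointly with the descriptors.

```python
# Minimal sketch of the NPBG idea (hypothetical names, not the repo's API).
import torch
import torch.nn as nn

class PointDescriptors(nn.Module):
    def __init__(self, num_points: int, dim: int = 8):
        super().__init__()
        # one learnable neural descriptor per point
        self.desc = nn.Parameter(torch.randn(num_points, dim) * 0.01)

class RenderNet(nn.Module):
    """Tiny stand-in for the U-Net-like rendering network."""
    def __init__(self, dim: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(dim, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

def rasterize(points_2d, desc, h, w):
    """Naive splatting of per-point descriptors into a (1, dim, h, w) grid."""
    canvas = torch.zeros(desc.shape[1], h, w)
    xs = points_2d[:, 0].long().clamp(0, w - 1)
    ys = points_2d[:, 1].long().clamp(0, h - 1)
    canvas[:, ys, xs] = desc.t()
    return canvas.unsqueeze(0)

# descriptors and the network are optimized together during fitting
descs = PointDescriptors(num_points=100)
net = RenderNet()
points_2d = torch.rand(100, 2) * 64  # projected point coordinates
img = net(rasterize(points_2d, descs.desc, 64, 64))  # (1, 3, 64, 64) image
```

In the actual method the rasterization is z-buffered and done at several resolutions, but the joint optimization of descriptors and network weights is the same in spirit.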
The following instructions describe installation of conda environment. If you wish to setup the Docker environment, see the Readme in the docker folder. This way is also recommended for headless machines (without X server enabled).
Run this command to install the Python environment using conda:

```bash
source scripts/install_deps.sh
```
You can render one of the fitted scenes we provide right away in the real-time viewer or fit your own scene.
Download fitted scenes and universal rendering network weights from here and unpack in the sources root directory.
We assume that you have at least one GeForce GTX 1080 Ti for fitting and inference.
Here we show a couple of examples of how to run fitted scenes in the viewer.

```bash
python viewer.py --config downloads/person_1.yaml --viewport 2000,1328 --origin-view
```
Since this scene was fitted on 4K images, we crop the image size with the `--viewport` argument to fit the scene into the memory of a modest GPU.
```bash
python viewer.py --config downloads/studio.yaml --rmode fly
```
Check the `downloads` directory for more examples.
Fitting a new scene consists of two steps:

1. Point cloud reconstruction
2. Fitting descriptors
There are many software packages for point cloud reconstruction. While it is possible to adapt different packages to our pipeline, we use Agisoft Metashape for this demonstration.
If you don't have a license for Agisoft Metashape Pro, start a trial version by filling in the form. On the first start, enter your license key.
Download and install Agisoft Metashape:
```bash
wget http://download.agisoft.com/metashape-pro_1_6_2_amd64.tar.gz
tar xvf metashape-pro_1_6_2_amd64.tar.gz
cd metashape-pro
LD_LIBRARY_PATH="python/lib:$LD_LIBRARY_PATH" ./python/bin/python3.5 -m pip install pillow
bash metashape.sh
```
Optionally, enable GPU acceleration by checking Tools -> Preferences -> GPU.
Depending on the specs of your PC, you may need to downscale your images to a proper size. We recommend using 4K images or smaller. For example, to downscale the images by a factor of two, run:

```bash
for fn in *.jpg; do convert $fn -resize 50% $fn; done
```
Build point cloud:
```bash
bash metashape.sh -r <npbg>/scripts/metashape_build_cloud.py <scene>
```

where `<npbg>` is the path to the NPBG sources and `<scene>` is a directory with an `images` subdirectory containing your scene images.
The script will produce:

* `point_cloud.ply`: dense point cloud
* `cameras.xml`: camera registration data
* `images_undistorted`: undistorted images for descriptor fitting
* `project.psz`: Metashape project
* `scene.yaml`: scene configuration for the NPBG viewer
Make sure the point cloud has no severe misalignments, and crop out unnecessary geometry to optimize memory consumption. To edit a scene, open `project.psz` in the Metashape GUI and export the modified point cloud (File -> Export -> Export Points). See the Issues section for further recommendations.
Now we can fit descriptors for this scene.
Modify `configs/paths_example.yaml` by setting absolute paths to the scene configuration file, target images and, optionally, masks. Add other scenes to this file if needed.
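For illustration, a path config might look like the sketch below. The key names here (`datasets`, `scene_path`, `target_path`, `mask_path`) are assumptions made to show the overall shape; consult `configs/paths_example.yaml` in the repository for the actual keys.

```yaml
# hypothetical path config sketch; see configs/paths_example.yaml for real keys
datasets:
  my_scene:
    scene_path: /data/my_scene/scene.yaml           # produced at reconstruction stage
    target_path: /data/my_scene/images_undistorted  # target images for fitting
    mask_path: /data/my_scene/masks                 # optional
```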
Fit the scene:

```bash
python train.py --config configs/train_example.yaml --pipeline npbg.pipelines.ogl.TexturePipeline --dataset_names <scene_name>
```

where `<scene_name>` is the name of the scene in `paths_example.yaml`. Model checkpoints and Tensorboard logs will be stored in `data/logs`.
The command above will finetune the weights of the rendering network. This regime usually produces more appealing results. To freeze the rendering network, use the `--freeze_net` option. We provide pretrained weights for the rendering network on the ScanNet and People datasets, located in `downloads/weights`. Set the pretrained network using the `net_ckpt` option in `train_example.yaml`.
If you have masks for the target images, use the `--use_masks` option. Make sure the masks align with the target images.
When the model converges (usually 10 epochs is enough), run the scene in the viewer:

```bash
python viewer.py --config <scene>.yaml --checkpoint data/logs/<experiment>/checkpoints/<checkpoint>.pth --origin-view
```

where `<scene>.yaml` is the scene configuration file created in the point cloud reconstruction stage, `--checkpoint` is the path to the descriptors checkpoint, and the `--origin-view` option automatically moves the geometry origin to the world origin for convenient navigation. You can manually assign the `model3d_origin` field in `<scene>.yaml` for an arbitrary origin transformation (see `downloads/person_1.yaml` for an example).
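The origin transformation is just a 4x4 homogeneous matrix applied to the scene geometry. As a minimal sketch, here is how one could compute a matrix that recenters the point cloud at the world origin (the helper name is hypothetical, and the exact format expected by `model3d_origin` should be checked against `downloads/person_1.yaml`):

```python
# Hypothetical helper illustrating an origin transform: a 4x4 homogeneous
# matrix; this one moves the point cloud centroid to the world origin.
import numpy as np

def origin_transform(points: np.ndarray) -> np.ndarray:
    """Return a 4x4 matrix translating the centroid of `points` to the origin."""
    t = np.eye(4)
    t[:3, 3] = -points.mean(axis=0)
    return t

pts = np.array([[1.0, 2.0, 3.0], [3.0, 4.0, 5.0]])
T = origin_transform(pts)
homo = np.c_[pts, np.ones(len(pts))]  # to homogeneous coordinates
centered = (T @ homo.T).T[:, :3]      # centroid is now at the origin
```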
Fitting novel scenes can sometimes be tricky, most often due to the preparation of camera poses, which are provided in different ways by different sources, or sometimes because of reconstruction issues (see below). We recommend checking out the related GitHub issues for detailed explanations.
The most important insight is related to the config structure. There is a system of three configs used in NPBG: a scene config, a path config, and a train config. (There is also an optional inference config, which is essentially a scene config with `net_ckpt` and `texture_ckpt` parameters: paths to the rendering network weights checkpoint and a descriptors checkpoint, respectively.)

To fit a new scene, one should create a scene config `configs/my_scene_name.yaml` and a path config `configs/my_scene_paths.yaml` by setting absolute paths to the scene configuration file, target images, and other optional parameters, such as masks. A path config can contain paths to the images of either one scene or several scenes, if needed. Examples of configs of all types can be found in the repository.
Since our repository is based on a custom, specific framework, we provide the following diagram with the basic code logic. For those who wish to extend our code with additional features or try out related ideas (which we would highly appreciate), this diagram should help to find where the changes should be applied in the code. At the same time, various technical intricacies are not shown here for the sake of clarity.
* **Camera settings.** If you are using a smartphone with Android, OpenCamera may come in handy. A good starting point for settings is f/8, ISO 300, shutter speed 1/125 s. iPhone users are recommended to fix the exposure in the Camera app. Follow this guide for more recommendations.
* **Viewer performance.** If PyTorch and the X server run on different GPUs, there will be extra data transfer overhead between the two GPUs. If a higher framerate is desirable, make sure they run on the same GPU (use `CUDA_VISIBLE_DEVICES`, e.g. `CUDA_VISIBLE_DEVICES=0`).
* **PyTorch crash on train.** There is a known issue where PyTorch crashes on the backward pass if different GPUs are present in the system, e.g. a GeForce GTX 1080 Ti and a GeForce RTX 2080 Ti. Use `CUDA_VISIBLE_DEVICES` to mask out one of the GPUs.
This is what we would like to implement as well. We would highly appreciate help from the community:

* a `colmap_build_cloud.py` script working in the same manner as `metashape_build_cloud.py`.
```
@article{Aliev2020,
  title={Neural Point-Based Graphics},
  author={Kara-Ali Aliev and Artem Sevastopolsky and Maria Kolos and Dmitry Ulyanov and Victor Lempitsky},
  year={2020},
  eprint={1906.08240v3},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```