# Distributed and Graph-based Structure from Motion
If you use this project for your research, please cite:

```
@article{article,
  author  = {Chen, Yu and Shen, Shuhan and Chen, Yisong and Wang, Guoping},
  year    = {2020},
  month   = {07},
  pages   = {107537},
  title   = {Graph-Based Parallel Large Scale Structure from Motion},
  journal = {Pattern Recognition},
  doi     = {10.1016/j.patcog.2020.107537}
}

@inproceedings{schoenberger2016sfm,
  author    = {Sch\"{o}nberger, Johannes Lutz and Frahm, Jan-Michael},
  title     = {Structure-from-Motion Revisited},
  booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2016},
}
```

If you use the RBR-based Lagrangian rotation averaging solver for your research, please cite:

```
@inproceedings{DBLP:conf/cvpr/ErikssonOKC18,
  author    = {Anders P. Eriksson and Carl Olsson and Fredrik Kahl and Tat{-}Jun Chin},
  title     = {Rotation Averaging and Strong Duality},
  booktitle = {{IEEE} Conference on Computer Vision and Pattern Recognition},
  pages     = {127--135},
  year      = {2018},
}

@article{EriksonOKC20,
  author  = {Eriksson, Anders and Olsson, Carl and Kahl, Fredrik and Chin, Tat-Jun},
  title   = {Rotation Averaging with the Chordal Distance: Global Minimizers and Strong Duality},
  journal = {{IEEE} Trans. Pattern Anal. Mach. Intell.},
  year    = {2020},
}

@techreport{Wen09rowby,
  author = {Zaiwen Wen and Donald Goldfarb and Shiqian Ma and Katya Scheinberg},
  title  = {Row by Row Methods for Semidefinite Programming},
  year   = {2009}
}
```
```sh
sudo apt-get install \
    git \
    cmake \
    build-essential \
    libboost-program-options-dev \
    libboost-filesystem-dev \
    libboost-graph-dev \
    libboost-regex-dev \
    libboost-system-dev \
    libboost-test-dev \
    libeigen3-dev \
    libsuitesparse-dev \
    libfreeimage-dev \
    libgoogle-glog-dev \
    libgflags-dev \
    libglew-dev \
    qtbase5-dev \
    libqt5opengl5-dev \
    libcgal-dev \
    libcgal-qt5-dev
```
```sh
sudo apt-get install libatlas-base-dev libsuitesparse-dev
git clone https://ceres-solver.googlesource.com/ceres-solver
cd ceres-solver
git checkout $(git describe --tags)  # Checkout the latest release
mkdir build
cd build
cmake .. -DBUILD_TESTING=OFF -DBUILD_EXAMPLES=OFF
make
sudo make install
```
igraph is used for community detection and graph visualization.

```sh
sudo apt-get install build-essential libxml2-dev
wget https://igraph.org/nightly/get/c/igraph-0.7.1.tar.gz
tar -xvf igraph-0.7.1.tar.gz
cd igraph-0.7.1
./configure
make
make check
sudo make install
```
rpclib is a light-weight Remote Procedure Call (RPC) library. Other RPC libraries, such as gRPC, were not chosen by this project, for flexibility and convenience.
```sh
git clone https://github.com/qchateau/rpclib.git
cd rpclib
mkdir build && cd build
cmake ..
make -j8
sudo make install
```
This module is used for similarity searching, though it needs more evaluation.

```sh
sudo pip install scikit-learn tensorflow-gpu==1.7.0 scipy numpy progressbar2
```
### 2.3 Build DAGSfM

```sh
git clone https://github.com/AIBluefisher/DAGSfM.git
cd DAGSfM
mkdir build && cd build
cmake .. && make -j8
```
As our algorithm is not integrated into the GUI of COLMAP, scripts are provided to run the distributed SfM (we hope someone is interested in integrating this pipeline into the GUI):
```sh
sudo chmod +x scripts/shell/distributed_sfm.sh
./distributed_sfm.sh $image_dir $num_images_ub $log_folder $completeness_ratio
```
- `$image_dir`: the directory that stores the images.
- `$num_images_ub`: the maximum number of images in each cluster, e.g. 80~120.
- `$log_folder`: the directory that stores the logs.
- `$completeness_ratio`: the ratio that measures the overlap between adjacent clusters.
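For concreteness, a hypothetical invocation could look like the following (the dataset paths and values are illustrative, not from the repository; `echo` makes it a dry run so the assembled command can be inspected first):

```sh
# Hypothetical values; remove `echo` to actually run the script.
image_dir=/data/gerrard-hall/images
num_images_ub=100
log_folder=/data/gerrard-hall/logs
completeness_ratio=0.7
echo ./distributed_sfm.sh $image_dir $num_images_ub $log_folder $completeness_ratio
```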
(1) First, we need to establish a server for every worker:

```sh
cd build/src/exe
./colmap local_sfm_worker --output_path=$output_path --port=$your_port
```

The RPC server established on the local worker listens on the given port and keeps waiting until the master assigns it a job. We can also establish multiple workers on one machine, but note that every port number must be unique!
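Starting several workers on one machine with unique ports can be sketched as a loop (the output paths here are hypothetical, and `echo` keeps this a dry run):

```sh
# One local_sfm_worker per port; every port must be unique on the machine.
# Remove `echo` (and append `&`) to actually launch the workers in background.
for port in 3000 3001 3002; do
  echo ./colmap local_sfm_worker --output_path=/tmp/worker_$port --port=$port
done
```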
(2) Then, the IP and port of every server should be written in a `config.txt` file. The file format should follow:

```txt
server_num
ip1 port1 image_path1
ip2 port2 image_path2
... ...
```

*Note: the `image_path` of each worker must be consistent with the `--output_path` option.*
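A hypothetical `config.txt` for two workers (the IPs, ports, and image paths are placeholders) might look like:

```txt
2
192.168.1.2 3000 /data/worker1/images
192.168.1.3 3000 /data/worker2/images
```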
(3) At last, start the master:

```sh
cd GraphSfM_PATH/scripts/shell
DATASET_PATH=/path/to/project
CONFIG_FILE_PATH=/path_to_config_file
num_images_ub=100
log_folder=/path_to_log_dir
./distributed_sfm.sh $DATASET_PATH $num_images_ub $log_folder $CONFIG_FILE_PATH
```
The `distributed_sfm.sh` script actually executes the following command in the SfM module:

```sh
/home/chenyu/Projects/Disco/build/src/exe/colmap distributed_mapper \
$DATASET_PATH/$log_folder \
--database_path=$DATASET_PATH/database.db \
--image_path=$DATASET_PATH/images \
--output_path=$DATASET_PATH/$log_folder \
--config_file_name=$CONFIG_FILE_PATH/config.txt \
--num_workers=8 \
--distributed=1 \
--repartition=0 \
--num_images=100 \
--script_path=/home/chenyu/Projects/Disco/scripts/shell/similarity_search.sh \
--dataset_path=$DATASET_PATH \
--output_dir=$DATASET_PATH/$log_folder \
--mirror_path=/home/chenyu/Projects/Disco/lib/mirror \
--assign_cluster_id=0 \
--write_binary=1 \
--retriangulate=0 \
--final_ba=1 \
--select_tracks_for_bundle_adjustment=1 \
--long_track_length_threshold=10 \
--graph_dir=$DATASET_PATH/$log_folder \
--num_images_ub=$num_images_ub \
--completeness_ratio=0.7 \
--relax_ratio=1.3 \
--cluster_type=SPECTRA # SPECTRA, NCUT, COMMUNITY_DETECTION
```
Thus, you need to replace `/home/chenyu/Projects/Disco/build/src/exe/colmap` and the `--script_path` and `--mirror_path` options with your own paths.

The parameters need to be reset for different purposes:
- `--transfer_images_to_server`: decides whether images stored on the master's disk are transferred to the workers' disks. If we want to execute a further MVS step, this option should be set to `1`, because each worker that executes MVS needs to access the raw images.
- `--distributed`: decides whether the SfM module runs in distributed or sequential mode. For example, if we just have one computer, we should set it to `0`; SfM then runs in sequential mode, which still allows you to reconstruct a large-scale image set on a single computer. If we set it to `1`, we must ensure the `--config_file_name` option is valid, so that SfM runs among many computers in a truly distributed mode.
- `assign_cluster_id`: indicates whether the program should assign each image a `cluster_id` when images are divided into several clusters. This option allows us to render image poses from different clusters in different colors.
- `write_binary`: indicates whether to save the SfM results in text or binary format.
- `final_ba`: indicates whether to perform a final bundle adjustment after merging all local maps. As a very large-scale map requires much time to optimize scene structures and camera poses, users should tune this option according to their needs.
- `select_tracks_for_bundle_adjustment`: as the final bundle adjustment requires too much time, we can instead select good tracks to optimize and achieve an accuracy comparable to full bundle adjustment.
- `long_track_length_threshold`: the maximum track length used when selecting good tracks for bundle adjustment.
- `num_images_ub`: the maximum number of images in each cluster.
- `completeness_ratio`: indicates the overlap between clusters; 0.5~0.7 is enough in practice.
- `cluster_type`: decides which clustering method is used for image clustering. We support NCut and spectral clustering. Spectral clustering is more accurate than NCut, but it might be slower when dividing images into many clusters, as it needs much time to compute eigenvectors.
If it succeeds, camera poses and sparse points should be included in the `$DATASET/sparse` folder; you can use COLMAP's GUI to import it and show the visual result:

```sh
./build/src/exe/colmap gui
```
For small-scale reconstruction, you can set `$num_images_ub` equal to the number of images; the program will then just use the incremental SfM pipeline of COLMAP. For large-scale reconstruction, our GraphSfM is highly recommended, and these parameters should be tuned carefully: a larger `$num_images_ub` and `$completeness_ratio` can make reconstruction more robust, but may also lead to low efficiency and even degenerate to the incremental pipeline.
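As a rough ballpark only (the actual partition comes from the graph cut, so this is not the project's formula), dividing the image count by `$num_images_ub` gives a lower bound on the number of clusters:

```sh
# Hypothetical sizes: 1000 images with at most 100 images per cluster.
num_images=1000
num_images_ub=100
min_clusters=$(( (num_images + num_images_ub - 1) / num_images_ub ))
echo "at least $min_clusters clusters (more once overlap is added)"
```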
In some cases we have a very large-scale map, such that a later Multi-View Stereo step becomes infeasible because of memory limitations. We can use the `point_cloud_segmenter` to segment the original map, stored in COLMAP format, into multiple small maps.
```sh
./build/src/exe/colmap point_cloud_segmenter \
--colmap_data_path=path_to_colmap_data \
--output_path=path_to_store_small_maps \
--max_image_num=max_number_image_for_small_map \
--write_binary=1
```
Before running this command, make sure `path_to_colmap_data` contains `images.txt`, `cameras.txt`, `points3D.txt` or `images.bin`, `cameras.bin`, `points3D.bin`.
- `max_image_num`: as COLMAP data includes image data for all registered images, we limit the number of images in each small map and use this parameter to segment large maps. Though it would be better to use the number of point clouds in practice, we haven't released the related implementation; we will enhance this helper further.
- `write_binary`: set to `1` to save COLMAP data in binary format, or set to `0` to save COLMAP data in text format.
2020.12.05
2020.06.24
`Image Graph`, `Similarity Graph`, and `View Graph` to handle different distributed tasks.
2020.04.11
2020.04.10
2020.03.04
2020.01.15
`rpclib` for Remote Procedure Call (RPC). The distributed implementation follows the Map-Reduce architecture.
2020.01.10
2019.11.26