awesome-depth

A curated list of publications for depth estimation


0. Survey

[1] Dijk et al, How do neural networks see depth in single images? PDF

[2] Bhoi, Monocular Depth Estimation: A Survey, PDF

[3] Zhao et al, Monocular Depth Estimation Based On Deep Learning: An Overview, PDF

1. Monocular Depth (Fully Supervised)

[1] Eigen et al, Depth Map Prediction from a Single Image using a Multi-Scale Deep Network, NIPS 2014, Web

[2] Eigen et al, Predicting Depth, Surface Normals and Semantic Labels with a Common Multi-Scale Convolutional Architecture, ICCV 2015, Web

[3] Laina et al, Deeper Depth Prediction with Fully Convolutional Residual Networks, 3DV 2016, Code

[4] Chen et al, Single-Image Depth Perception in the Wild, NIPS 2016, Web

[5] Li et al, A Two-Streamed Network for Estimating Fine-Scaled Depth Maps from Single RGB Images, ICCV 2017, PDF

[6] Xu et al, Structured Attention Guided Convolutional Neural Fields for Monocular Depth Estimation, CVPR 2018, PDF

[7] Xu et al, PAD-Net: Multi-Tasks Guided Prediction-and-Distillation Network, CVPR 2018, PDF

[8] Qi et al, GeoNet: Geometric Neural Network for Joint Depth and Surface Normal Estimation, CVPR 2018, PDF

[9] Fu et al, Deep Ordinal Regression Network for Monocular Depth Estimation, CVPR 2018, PDF

[10] Zhang et al, Joint Task-Recursive Learning for Semantic Segmentation and Depth Estimation, ECCV 2018, PDF

[11] Jiao et al, Look Deeper into Depth: Monocular Depth Estimation with Semantic Booster and Attention-Driven Loss, ECCV 2018, PDF

[12] Cheng et al, Depth Estimation via Affinity Learned with Convolutional Spatial Propagation Network, ECCV 2018, PDF, Code

[13] Lee et al, Monocular depth estimation using relative depth maps, CVPR 2019, PDF, Code

[14] Lee et al, From Big to Small: Multi-Scale Local Planar Guidance for Monocular Depth Estimation, arXiv, PDF, Code

[15] Zhang et al, Pattern-Affinitive Propagation across Depth, Surface Normal and Semantic Segmentation, CVPR 2019, PDF

[16] Zhang et al, Exploiting temporal consistency for real-time video depth estimation, ICCV 2019, PDF, Code
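
The fully supervised line of work above starts with Eigen et al. [1], whose scale-invariant log error is still widely used both as a training loss and as an evaluation metric. A minimal pure-Python sketch of that formula (the function name and list-based inputs are illustrative, not taken from any of the cited codebases):

```python
import math

def scale_invariant_error(pred, target, lam=0.5):
    """Scale-invariant log error from Eigen et al. (NIPS 2014):

        D(y, y*) = (1/n) * sum(d_i^2) - (lam/n^2) * (sum(d_i))^2,

    where d_i = log(y_i) - log(y*_i). The paper uses lam = 0.5 for training.
    """
    d = [math.log(p) - math.log(t) for p, t in zip(pred, target)]
    n = len(d)
    return sum(di * di for di in d) / n - lam * sum(d) ** 2 / (n * n)
```

With lam = 1 the error is fully invariant to a global scaling of the prediction, which matches the inherent scale ambiguity of monocular depth.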

2. Monocular Depth (Semi- / Un-Supervised)

2.1 Stereo Consistency

[1] Garg et al, Unsupervised CNN for Single View Depth Estimation: Geometry to the Rescue, ECCV 2016, Code

[2] Godard et al, Unsupervised Monocular Depth Estimation with Left-Right Consistency, CVPR 2017, Web

[3] Kuznietsov et al, Semi-Supervised Deep Learning for Monocular Depth Map Prediction, CVPR 2017, Code

[4] Luo et al, Single View Stereo Matching, CVPR 2018, Code

[5] Godard et al, Digging Into Self-Supervised Monocular Depth Estimation, arXiv 2018, PDF

[6] Lai et al, Bridging Stereo Matching and Optical Flow via Spatiotemporal Correspondence, CVPR 2019, PDF, Code

[7] Tosi et al, Learning monocular depth estimation infusing traditional stereo knowledge, CVPR 2019, PDF, Code

[8] Garg et al, Learning Single Camera Depth Estimation using Dual-Pixels, ICCV 2019, PDF, Code

[9] Zhang et al, Du2Net: Learning Depth Estimation from Dual-Cameras and Dual-Pixels, arXiv 2020, PDF
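
The methods in this subsection train without ground-truth depth by enforcing consistency between the two views of a stereo pair; Godard et al. [2] additionally penalize disagreement between the left and right disparity maps. A toy one-dimensional sketch of that left-right consistency term (nearest-neighbour sampling instead of the paper's bilinear sampler; names are illustrative):

```python
def lr_consistency_loss(disp_left, disp_right):
    """L1 left-right disparity consistency over one scanline.

    For each pixel x, the left disparity should agree with the right
    disparity sampled at the corresponding location x - d_L(x).
    Out-of-bounds correspondences are skipped.
    """
    w = len(disp_left)
    total, count = 0.0, 0
    for x, d in enumerate(disp_left):
        xr = int(round(x - d))          # corresponding pixel in the right view
        if 0 <= xr < w:
            total += abs(d - disp_right[xr])
            count += 1
    return total / max(count, 1)
```

A consistent disparity pair drives this term to zero; occluded regions, where no consistent match exists, are what the loss implicitly flags.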

2.2 Multi View

[1] Zhou et al, Unsupervised Learning of Depth and Ego-Motion from Video, CVPR 2017, Web

[2] Im et al, Robust Depth Estimation from Auto Bracketed Images, CVPR 2018, PDF

[3] Yin et al, GeoNet: Unsupervised Learning of Dense Depth, Optical Flow and Camera Pose, CVPR 2018, Code

[4] Wang et al, Learning Depth from Monocular Videos using Direct Methods, CVPR 2018, Code

[5] Yang et al, LEGO: Learning Edge with Geometry all at Once by Watching Videos, CVPR 2018, Code

[6] Mahjourian et al, Unsupervised Learning of Depth and Ego-Motion from Monocular Video Using 3D Geometric Constraints, CVPR 2018, PDF

[7] Zhan et al, Unsupervised Learning of Monocular Depth Estimation and Visual Odometry with Deep Feature Reconstruction, CVPR 2018, Web

[8] Ranjan et al, Competitive Collaboration: Joint Unsupervised Learning of Depth, Camera Motion, Optical Flow and Motion Segmentation, CVPR 2019, PDF, Code

[9] Bian et al, Unsupervised Scale-consistent Depth and Ego-motion Learning from Monocular Video, NeurIPS 2019, PDF, Code

[10] Luo et al, Consistent Video Depth Estimation, SIGGRAPH 2020, Web

[11] Guizilini et al, 3D Packing for Self-Supervised Monocular Depth Estimation, CVPR 2020, PDF, Code

[12] Zhao et al, Towards Better Generalization: Joint Depth-Pose Learning without PoseNet, CVPR 2020, PDF, Code
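
The video-based methods above share one core operation: using predicted depth and relative camera pose to reproject pixels from one frame into another, so a photometric loss can supervise both networks. A minimal pinhole back-projection/projection pair (intrinsics fx, fy, cx, cy are assumed known; the pose transform between frames is omitted for brevity):

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Lift pixel (u, v) with depth z into a 3-D camera-space point."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

def project(x, y, z, fx, fy, cx, cy):
    """Pinhole projection of a 3-D camera-space point back to pixel coordinates."""
    return (fx * x / z + cx, fy * y / z + cy)
```

In a full pipeline, a rigid transform from the predicted pose sits between these two calls, and the reprojected pixel samples the source image to form the photometric reconstruction error.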

3. Depth Completion/Super-resolution

[1] Cheng et al, Learning Depth with Convolutional Spatial Propagation Network, arXiv 2018, PDF, Code

[2] Zhang et al, Deep Depth Completion of a Single RGB-D Image, CVPR 2018, PDF

[3] Qiu et al, DeepLiDAR: Deep Surface Normal Guided Depth Prediction for Outdoor Scene from Sparse LiDAR Data and Single Color Image, CVPR 2019, PDF, Code

[4] Chen et al, Learning Joint 2D-3D Representations for Depth Completion, ICCV 2019, PDF

[5] Tang et al, Learning Guided Convolutional Network for Depth Completion, arXiv 2019, PDF, Code
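
Several of these completion methods, notably Cheng et al. [1], refine an initial depth map by repeatedly blending each pixel with its neighbours using learned affinity weights. A one-dimensional toy of one such propagation step (the affinity values here are hand-picked stand-ins for what the network would predict):

```python
def propagate_1d(depth, affinity, iters=1):
    """1-D toy of convolutional spatial propagation.

    Each iteration updates depth[i] as an affinity-weighted blend of its
    neighbours; the centre weight is chosen so the weights sum to 1,
    which keeps the update a convex combination. Endpoints are fixed.
    """
    d = list(depth)
    for _ in range(iters):
        nxt = list(d)
        for i in range(1, len(d) - 1):
            a_l, a_r = affinity[i]          # learned left/right affinities
            a_c = 1.0 - a_l - a_r           # centre weight (normalization)
            nxt[i] = a_l * d[i - 1] + a_c * d[i] + a_r * d[i + 1]
        d = nxt
    return d
```

Running several iterations lets reliable sparse measurements diffuse into regions where the initial estimate is poor.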

4. Depth Fusion

[1] Marin et al, Reliable Fusion of ToF and Stereo Depth Driven by Confidence Measures, ECCV 2016, PDF

[2] Agresti et al, Deep Learning for Confidence Information in Stereo and ToF Data Fusion, ICCVW 2017, Web
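
Both fusion papers above weight the ToF and stereo depth hypotheses by per-pixel confidence estimates. The simplest instance of that idea is a confidence-weighted average (a sketch, not the papers' actual fusion rules, which are more elaborate):

```python
def fuse_depth(d_tof, c_tof, d_stereo, c_stereo):
    """Per-pixel confidence-weighted fusion of two depth hypotheses.

    Each output value is the average of the ToF and stereo depths,
    weighted by their confidences; zero total confidence yields 0.0.
    """
    fused = []
    for dt, ct, ds, cs in zip(d_tof, c_tof, d_stereo, c_stereo):
        w = ct + cs
        fused.append((ct * dt + cs * ds) / w if w > 0 else 0.0)
    return fused
```

The interesting part of the cited work is estimating the confidences themselves, hand-crafted in Marin et al. [1] and learned in Agresti et al. [2].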

5. Depth Dataset

[1] Srinivasan et al, Aperture Supervision for Monocular Depth Estimation, CVPR 2018, Code

[2] Li et al, MegaDepth: Learning Single-View Depth Prediction from Internet Photos, CVPR 2018, Web

[3] Xian et al, Monocular Relative Depth Perception with Web Stereo Data Supervision, CVPR 2018, PDF

[4] Li et al, Learning the Depths of Moving People by Watching Frozen People, CVPR 2019 (oral), Web

[5] Chen et al, Learning Single-Image Depth from Videos using Quality Assessment Networks, CVPR 2019, PDF

[6] See Link for more conventional datasets.

6. 3D Photography

[1] Zhou et al, Stereo Magnification: Learning view synthesis using multiplane images, SIGGRAPH 2018, Web

[2] Niklaus et al, 3D Ken Burns Effect from a Single Image, SIGGRAPH 2019, PDF, Code

[3] Wiles et al, SynSin: End-to-end View Synthesis from a Single Image, CVPR 2020, Web

[4] Shih et al, 3D Photography using Context-aware Layered Depth Inpainting, CVPR 2020, Web

[5] Mildenhall et al, NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis, arXiv 2020, Web

[6] Chen et al, Monocular Neural Image Based Rendering with Continuous View Control, ICCV 2019, Web

[7] Chen et al, Learning Efficient Point Cloud Generation for Dense 3D Object Reconstruction, AAAI 2018, PDF

[8] Kopf et al, One Shot 3D Photography, SIGGRAPH 2020, PDF, Web

7. RGB-D Application

8. Optical Flow & Scene Flow

[1] Dosovitskiy et al, FlowNet: Learning optical flow with convolutional networks, CVPR 2015, PDF

[2] Yu et al, Back to basics: Unsupervised learning of optical flow via brightness constancy and motion smoothness, ECCV 2016 Workshop, PDF

[3] Bailer et al, CNN-based Patch Matching for Optical Flow with Thresholded Hinge Embedding Loss, CVPR 2017, PDF

[4] Ranjan et al, Optical Flow Estimation using a Spatial Pyramid Network (SpyNet), CVPR 2017, Code

[5] Ilg et al, FlowNet 2.0: Evolution of Optical Flow Estimation with Deep Networks, CVPR 2017, Code

[6] Sun et al, PWC-Net: CNNs for Optical Flow Using Pyramid, Warping, and Cost Volume, CVPR 2018, Code

[7] Wang et al, Occlusion Aware Unsupervised Learning of Optical Flow, CVPR 2018, PDF

[8] Hui et al, LiteFlowNet: A Lightweight Convolutional Neural Network for Optical Flow Estimation, CVPR 2018, PDF
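
The unsupervised entries in this section (e.g. Yu et al. [2], Wang et al. [7]) train flow networks on brightness constancy: a pixel's intensity should be unchanged after displacement by the predicted flow. A one-dimensional, nearest-neighbour sketch of that photometric loss (illustrative names; real implementations use bilinear warping and occlusion masking):

```python
def photometric_loss(img1, img2, flow):
    """Brightness-constancy loss for 1-D flow.

    For each pixel x, sample img2 at x + flow[x] and compare the
    intensity with img1[x] (L1). Out-of-bounds samples are skipped.
    """
    total, count = 0.0, 0
    w = len(img1)
    for x, f in enumerate(flow):
        xs = int(round(x + f))          # where pixel x moved to in img2
        if 0 <= xs < w:
            total += abs(img1[x] - img2[xs])
            count += 1
    return total / max(count, 1)
```

A smoothness regularizer on the flow field is normally added, since brightness constancy alone is ambiguous in textureless regions.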
