Multi-Task (Joint Segmentation / Depth / Surface Normals) Real-Time Light-Weight RefineNet


Real-Time Joint Semantic Segmentation, Depth and Surface Normals Estimation (in PyTorch)


This repository provides the official models from the paper:

Real-Time Joint Semantic Segmentation and Depth Estimation Using Asymmetric Annotations
Vladimir Nekrasov, Thanuja Dharmasiri, Andrew Spek, Tom Drummond, Chunhua Shen, Ian Reid
In ICRA 2019
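At a glance, the network maps one shared feature representation to three aligned per-pixel outputs. The stand-in below (plain numpy, with illustrative shapes and head names, not the paper's actual RefineNet architecture) sketches that output structure: per-pixel 1×1-conv-style heads producing segmentation logits, non-negative depth, and unit-length surface normals.

```python
import numpy as np

def multi_task_heads(features, num_classes=40):
    """Toy stand-in for the joint heads: one shared feature map,
    three per-pixel predictions (segmentation / depth / normals).

    features: (H, W, C) shared backbone features.
    Returns seg logits (H, W, num_classes), depth (H, W),
    and unit-length surface normals (H, W, 3).
    """
    h, w, c = features.shape
    rng = np.random.default_rng(0)
    # Each "head" is a 1x1 convolution, i.e. a per-pixel linear map.
    w_seg = rng.standard_normal((c, num_classes))
    w_depth = rng.standard_normal((c, 1))
    w_norm = rng.standard_normal((c, 3))

    seg_logits = features @ w_seg                        # (H, W, num_classes)
    depth = np.maximum(features @ w_depth, 0.0)[..., 0]  # non-negative depth
    normals = features @ w_norm
    # Surface normals are constrained to unit length.
    normals /= np.linalg.norm(normals, axis=-1, keepdims=True) + 1e-8
    return seg_logits, depth, normals

feats = np.random.default_rng(1).standard_normal((4, 4, 8))
seg, depth, normals = multi_task_heads(feats)
```

Because all three heads share one backbone, a single forward pass yields all three predictions, which is what makes the joint model real-time.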

Getting Started

For flawless reproduction of our results, the Ubuntu OS is recommended. The models have been tested using Python 2.7.



To install the required Python packages (Python 2), run

pip install -r requirements.txt

For a local (per-user) installation, add pip's --user flag. The given examples can be run with or without a GPU.
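Since the examples run with or without a GPU, the compute device is typically selected at runtime. A minimal sketch of that pattern (this is the standard PyTorch idiom, not code copied from the repository):

```python
# Pick the compute device: CUDA GPU when available, otherwise CPU.
# Guarding the import also lets the snippet run where PyTorch is absent.
try:
    import torch
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
except ImportError:  # PyTorch not installed
    device = "cpu"

print(device)
```

Models and input tensors would then be moved onto `device` with `.to(device)` before running inference.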

Running examples

For ease of reproduction, we have embedded all our examples inside Jupyter notebooks.

Jupyter Notebooks [Local]

If all the installation steps completed successfully, you can proceed to run any of the notebooks provided in the repository. To start the Jupyter Notebook server, run
jupyter notebook
on your local machine. This will open a page in your browser. If it does not open automatically, copy the URL (including the port and token) printed in the command's output into your browser. After that, navigate to the repository folder and choose any of the examples given.

More to come

As time permits, more will be added to this repository:

  • ~~Training and evaluation examples~~ please refer to this repository.

More projects to check out

  1. This project heavily relies on Light-Weight RefineNet


License

For academic usage, this project is licensed under the 2-clause BSD License - see the LICENSE file for details. For commercial usage, please contact the authors.


Acknowledgements

  • University of Adelaide and Australian Centre for Robotic Vision (ACRV) for making this project happen
  • HPC Phoenix cluster at the University of Adelaide for making the training of the models possible
  • PyTorch developers
  • Yerba mate tea
