
Multi-Task Learning

This repo aims to implement several multi-task learning models and training strategies in PyTorch. The code base complements the following works:

Multi-Task Learning for Dense Prediction Tasks: A Survey

Simon Vandenhende, Stamatios Georgoulis, Wouter Van Gansbeke, Marc Proesmans, Dengxin Dai and Luc Van Gool.

MTI-Net: Multi-Scale Task Interaction Networks for Multi-Task Learning

Simon Vandenhende, Stamatios Georgoulis and Luc Van Gool.

An up-to-date list of works on multi-task learning can be found here.

Workshop

:loudspeaker: :loudspeaker: :loudspeaker: We are organizing a workshop on multi-task learning at ICCV 2021. More information can be found on our website.
- April 10: We have confirmed eight excellent speakers: Rich Caruana (Microsoft), Chelsea Finn (Stanford), Judy Hoffman (Georgia Tech), Iasonas Kokkinos (University College London), Andrew Rabinovich (Headroom Inc.), Raquel Urtasun (University of Toronto), Luc Van Gool (KU Leuven & ETH Zurich) and Amir Zamir (EPFL).
- June 2: The submission website is now live.

Installation

The code runs with a recent PyTorch version, e.g. 1.4. Assuming Anaconda, the most important packages can be installed as:

```shell
conda install pytorch torchvision cudatoolkit=10.2 -c pytorch
conda install imageio scikit-image    # Image operations
conda install -c conda-forge opencv   # OpenCV
conda install pyyaml easydict         # Configurations
conda install termcolor               # Colorful print statements
```

We refer to the `requirements.txt` file for an overview of the package versions in our own environment.

Usage

Setup

The following files need to be adapted in order to run the code on your own machine:
- Change the file paths to the datasets in `utils/mypath.py`, e.g. `/path/to/pascal/`.
- Specify the output directory in `configs/your_env.yml`. All results will be stored under this directory.
- The seism repository is needed to perform the edge evaluation. See the README in `./evaluation/seism/`.
- If you want to use the HRNet backbones, please download the pre-trained weights here. The provided config files use an HRNet-18 backbone. Download `hrnet_w18_small_model_v2.pth` and save it to the directory `./models/pretrained_models/`.

The datasets will be downloaded automatically to the specified paths when running the code for the first time.
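As a hypothetical sketch of the first step (the actual contents of `utils/mypath.py`, including function and dataset names, may differ), the path configuration boils down to a dataset-name-to-directory mapping that you point at your local copies:

```python
# Hypothetical sketch of the dataset-path lookup in utils/mypath.py;
# the real function names and dataset keys may differ.
DB_ROOTS = {
    'PASCALContext': '/path/to/pascal/',  # placeholder: adapt to your machine
    'NYUD': '/path/to/nyud/',             # placeholder: adapt to your machine
}

def db_root_dir(database: str) -> str:
    """Return the configured root directory for a dataset name."""
    if database not in DB_ROOTS:
        raise ValueError(f'Unknown dataset: {database}')
    return DB_ROOTS[database]
```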

Training

The configuration files to train the model can be found in the `configs/` directory. The model can be trained by running the following command:

```shell
python main.py --config_env configs/env.yml --config_exp configs/$DATASET/$MODEL.yml
```
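The two flags split the configuration in two: `--config_env` carries machine-specific settings (e.g. the output directory) while `--config_exp` selects the dataset and model. A minimal sketch of how such flags might be parsed (the actual argument handling in `main.py` may differ):

```python
import argparse

# Minimal sketch of parsing the two config flags shown above;
# the real main.py may handle arguments differently.
parser = argparse.ArgumentParser(description='Multi-task training')
parser.add_argument('--config_env', required=True,
                    help='environment config (paths, output directory)')
parser.add_argument('--config_exp', required=True,
                    help='experiment config (dataset, model, hyperparameters)')

# Example invocation mirroring the command above:
args = parser.parse_args(['--config_env', 'configs/env.yml',
                          '--config_exp', 'configs/DATASET/MODEL.yml'])
```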

Evaluation

We evaluate the best model at the end of training. The evaluation criterion is based on Equation 10 from our survey paper and requires pre-training a set of single-task networks beforehand. To speed up training, it is possible to evaluate the model only during the final 10 epochs by adding the following line to your config file:

```yaml
eval_final_10_epochs_only: True
```
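For intuition, Equation 10 of the survey defines multi-task performance as the average per-task relative gain of the multi-task model over its single-task counterparts, with the sign flipped for metrics where lower is better. A small sketch of that computation, using made-up illustrative numbers rather than real results:

```python
def multi_task_performance(multi, single, lower_is_better):
    """Average relative gain of a multi-task model over single-task
    baselines (in the spirit of Eq. 10 of the survey): positive = better."""
    assert len(multi) == len(single) == len(lower_is_better)
    total = 0.0
    for m, s, low in zip(multi, single, lower_is_better):
        sign = -1.0 if low else 1.0   # flip sign when lower is better
        total += sign * (m - s) / s
    return total / len(multi)

# Illustrative values only: mIoU (higher is better), depth RMSE (lower is better).
delta_m = multi_task_performance(multi=[66.0, 0.60],
                                 single=[65.0, 0.58],
                                 lower_is_better=[False, True])
# delta_m ≈ -0.0095, i.e. a slight overall drop versus single-task models.
```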

Support

The following datasets and tasks are supported.

| Dataset | Sem. Seg. | Depth | Normals | Edge | Saliency | Human Parts |
|---------|-----------|-------|---------|------|----------|-------------|
| PASCAL  | Y | N | Y | Y | Y | Y |
| NYUD    | Y | Y | Aux | Aux | N | N |

The following models are supported.

| Backbone | HRNet | ResNet |
|----------|-------|--------|
| Single-Task  | Y | Y |
| Multi-Task   | Y | Y |
| Cross-Stitch |   | Y |
| NDDR-CNN     |   | Y |
| MTAN         |   | Y |
| PAD-Net      | Y |   |
| MTI-Net      | Y |   |

References

This code repository is heavily based on the ASTMT repository. In particular, the evaluation and dataloaders were taken from there.

Citation

If you find this repo useful for your research, please consider citing the following works:

@article{vandenhende2021multi,
  author={S. Vandenhende and S. Georgoulis and W. Van Gansbeke and M. Proesmans and D. Dai and L. Van Gool},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence}, 
  title={Multi-Task Learning for Dense Prediction Tasks: A Survey}, 
  year={2021},
  volume={},
  number={},
  pages={1-1},
  doi={10.1109/TPAMI.2021.3054719}}

@inproceedings{vandenhende2020mti,
  title={MTI-Net: Multi-Scale Task Interaction Networks for Multi-Task Learning},
  author={Vandenhende, Simon and Georgoulis, Stamatios and Van Gool, Luc},
  booktitle={European Conference on Computer Vision (ECCV)},
  year={2020}}

@inproceedings{MRK19,
  author={Kevis-Kokitsi Maninis and Ilija Radosavovic and Iasonas Kokkinos},
  title={Attentive Single-Tasking of Multiple Tasks},
  booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2019}}

@article{pont2015supervised,
  title={Supervised Evaluation of Image Segmentation and Object Proposal Techniques},
  author={Pont-Tuset, Jordi and Marques, Ferran},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2015}}

Updates

For more information see issue #1.

The initial code used the NYUDv2 dataloader from ASTMT. This implementation differed from the one we used to run our experiments in the survey. Therefore, we have re-written the NYUDv2 dataloader to be consistent with our survey results. To avoid any issues, it is best to remove your old version of the NYUDv2 dataset. The Python script will then automatically download the correct version when using the NYUDv2 dataset.

The depth task is evaluated in a pixel-wise fashion to be consistent with the survey. This is different from ASTMT, which averages the results across the images.
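The difference matters whenever images contribute different numbers of pixels or have different error magnitudes. A toy sketch contrasting the two conventions (hypothetical helper names, plain lists standing in for prediction and ground-truth tensors):

```python
import math

def rmse_pooled(preds, gts):
    """Pixel-wise evaluation (the convention used here): pool every pixel
    of the dataset, then compute a single RMSE over the pooled set."""
    sq = [(p - g) ** 2 for pr, gt in zip(preds, gts) for p, g in zip(pr, gt)]
    return math.sqrt(sum(sq) / len(sq))

def rmse_per_image_mean(preds, gts):
    """ASTMT-style evaluation: compute one RMSE per image,
    then average those RMSEs across images."""
    per_img = [math.sqrt(sum((p - g) ** 2 for p, g in zip(pr, gt)) / len(pr))
               for pr, gt in zip(preds, gts)]
    return sum(per_img) / len(per_img)
```

With unequal per-image errors the two numbers diverge, so results computed under one convention are not directly comparable to the other.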

License

This software is released under a Creative Commons license which allows for personal and research use only. For a commercial license please contact the authors. You can view a license summary here.

Acknowledgements

The authors acknowledge support by Toyota via the TRACE project and MACCHINA (KULeuven, C14/18/065).
