DeepLab resnet v2 model implementation in pytorch.

The architecture of deepLab-ResNet has been replicated exactly as it is from the caffe implementation. This architecture calculates losses on input images over multiple scales ( 1x, 0.75x, 0.5x ). Losses are calculated individually over these 3 scales. In addition to these 3 losses, one more loss is calculated after merging the output score maps on the 3 scales. These 4 losses are added to calculate the total loss.
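The four-loss scheme described above can be sketched as follows. This is a minimal illustration, not the repository's actual training code; the names `model`, `scales`, and the element-wise max merge of the score maps are assumptions:

```python
import torch
import torch.nn.functional as F

def multi_scale_loss(model, image, label, scales=(1.0, 0.75, 0.5)):
    """Sum the per-scale losses plus a fourth loss on the merged score map."""
    criterion = torch.nn.CrossEntropyLoss()
    h, w = label.shape[-2:]
    outputs = []
    total = 0.0
    for s in scales:
        scaled = image if s == 1.0 else F.interpolate(
            image, scale_factor=s, mode='bilinear', align_corners=False)
        out = model(scaled)
        # bring each score map back to ground-truth resolution
        out = F.interpolate(out, size=(h, w), mode='bilinear', align_corners=False)
        outputs.append(out)
        total = total + criterion(out, label)
    # merge the three score maps and add the fourth loss
    merged = torch.max(torch.max(outputs[0], outputs[1]), outputs[2])
    total = total + criterion(merged, label)
    return total
```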


18 July 2017

  • One more evaluation script has been added. The old evaluation script uses a different methodology to take the mean of IoUs than the one used by the authors. The Results section has been updated to incorporate this change.

24 June 2017

  • Now, weights over the 3 scales (1x, 0.75x, 0.5x) are shared as in the caffe implementation. Previously, each of the 3 scales had separate weights. Results are almost the same after making this change (more in the Results section). However, the size of the trained .pth model has reduced significantly. Memory occupied on the GPU (11.9 GB) and time taken (~3.5 hours) during training are the same as before. Links to the corresponding .pth files have been updated.
  • Custom data can be used to train pytorch-deeplab-resnet; the flag --NoLabels (total number of labels in the training data) has been added for this purpose. Please note that labels should be denoted by contiguous values (starting from 0) in the ground truth images. For example, if there are 7 different labels, then each ground truth image must have these labels as 0, 1, 2, ..., 6 (NoLabels - 1).
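If a custom dataset uses non-contiguous label values, they can be made contiguous with a simple remapping pass. A sketch (the label values in the usage example are hypothetical):

```python
import numpy as np

def remap_labels(gt, label_values):
    """Map arbitrary ground-truth label values to contiguous ids 0..NoLabels-1."""
    out = np.zeros_like(gt)
    for new_id, old_value in enumerate(sorted(label_values)):
        out[gt == old_value] = new_id
    return out

# hypothetical example: a mask using values {0, 50, 100} becomes {0, 1, 2}
mask = np.array([[0, 50], [100, 50]])
remapped = remap_labels(mask, {0, 50, 100})
```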

The older version (prior to 24 June 2017) is available here.


Note that this repository has been tested with python 2.7 only.

Converting released caffemodel to pytorch model

To convert the caffemodel released by the authors, download the deeplab-resnet caffemodel pretrained on VOC into the data folder. After that, run the conversion script to generate the corresponding pytorch model file (.pth). The generated .pth snapshot file can be used to get the exact same test performance as offered by using the caffemodel in caffe (as shown by the numbers in the Results section). If you do not want to generate the .pth file yourself, you can download it here.

To run the conversion script, deeplab v2 caffe and pytorch (python 2.7) are required.

If you want to train your model in pytorch, move to the next section.


Step 1: Convert the MS COCO pretrained caffemodel to a .pth file. This caffemodel contains MS COCO trained weights, which we use as initialization for all but the final layer of our model. For the last layer, we use a random gaussian initialization with a standard deviation of 0.01. To perform the conversion, run init_net_surgery.py (or download the converted .pth here).

To run init_net_surgery.py, deeplab v2 caffe and pytorch (python 2.7) are required.
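The final-layer initialization described in Step 1 can be sketched in pytorch as follows. This is an illustration, not the code of init_net_surgery.py; the layer passed in is an assumption:

```python
import torch.nn as nn

def reinit_final_layer(layer, std=0.01):
    """Random gaussian init (std 0.01) for the final classifier layer's weights."""
    nn.init.normal_(layer.weight, mean=0.0, std=std)
    if layer.bias is not None:
        nn.init.zeros_(layer.bias)
    return layer
```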

Step 2: Now that we have our initialization, we can train deeplab-resnet by running the training script.

To get a description of each command-line argument, run the script with the -h flag.

To run the training script, pytorch (python 2.7) is required.

By default, snapshots are saved every 1000 iterations in data/snapshots. The following features have been implemented in this repository:

  • The training regime is the same as that of the caffe implementation: SGD with momentum is used, along with the same lr decay policy. A weight decay has been used. The last layer is trained with a multiple of the learning rate used for the other layers.
  • The iter_size parameter of caffe has been implemented, effectively increasing the batch size to batch_size times iter_size.
  • Random flipping and random scaling of the input have been used as data augmentation. The caffe implementation uses fixed scales (0.5, 0.75, 1, 1.25, 1.5), while in the pytorch implementation, for each iteration the scale is randomly picked in the range [0.5, 1.3].
  • The boundary label (255 in the ground truth labels) has not been ignored in the loss function in the current version; instead it has been merged with the background. The ignore_label caffe parameter would be implemented in future versions. Post processing using a CRF has not been implemented.
  • Batchnorm parameters are kept fixed during training; the caffe setting use_global_stats = True is reproduced, so running mean and variance are not calculated during training.
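Caffe's polynomial ("poly") lr decay policy, which DeepLab's caffe configuration uses, can be sketched as below. The power value of 0.9 is the usual DeepLab setting and is stated here as an assumption, not read from this repository's code:

```python
def poly_lr(base_lr, iteration, max_iter, power=0.9):
    """Caffe 'poly' policy: lr = base_lr * (1 - iter/max_iter)^power."""
    return base_lr * (1.0 - float(iteration) / max_iter) ** power

# usage: at iteration 0 the lr equals base_lr; it decays smoothly to 0 at max_iter
```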

When run on an Nvidia Titan X GPU, training occupies about 11.9 GB of memory.


Evaluation of the saved models can be done by running the evaluation script.

To get a description of each command-line argument, run the script with the -h flag.


When trained on the VOC augmented training set (with 10582 images) using the MS COCO pretrained initialization in pytorch, we get a validation performance of 72.40% (mean IoU on VOC). The corresponding .pth file can be downloaded here. This is in comparison to the 75.54% that is achieved by using the caffemodel released by the authors, which can be replicated by running this file. The model converted using the first section also gives 75.54% mean IoU. A previous version of this file reported a mean IoU of 78.48% on the pytorch trained model, which was calculated in a different way (mean IoU is calculated for each image and these values are averaged together; this way of calculating mean IoU is different from the one used by the authors).
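The two ways of computing mean IoU mentioned above can be made concrete: the authors accumulate a single confusion matrix over the whole dataset, while the older script computes an IoU per image and averages those. A sketch of both (an illustration, not the repository's evaluation code):

```python
import numpy as np

def confusion(pred, gt, n_classes):
    """Confusion matrix: rows are ground-truth classes, columns predictions."""
    return np.bincount(n_classes * gt.ravel() + pred.ravel(),
                       minlength=n_classes ** 2).reshape(n_classes, n_classes)

def mean_iou_dataset(preds, gts, n_classes):
    """Authors' way: one confusion matrix accumulated over the whole set."""
    cm = sum(confusion(p, g, n_classes) for p, g in zip(preds, gts))
    inter = np.diag(cm).astype(float)
    union = cm.sum(axis=0) + cm.sum(axis=1) - np.diag(cm)
    return np.nanmean(inter / union)

def mean_iou_per_image(preds, gts, n_classes):
    """Older script's way: per-image mean IoU, averaged over images."""
    return np.mean([mean_iou_dataset([p], [g], n_classes)
                    for p, g in zip(preds, gts)])
```

The two numbers differ whenever per-image class frequencies vary, which is why the reported figure changed when the methodology was aligned with the authors'.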

To replicate this performance, run the training script with: --lr 0.00025 --wtDecay 0.0005 --maxIter 20000 --GTpath <ground truth folder> --IMpath <image folder> --LISTpath data/list/train_aug.txt


The model presented in the results section was trained using the augmented VOC train set which was released by this paper. You may download this augmented data directly from here.

Note that this code can be used to train pytorch-deeplab-resnet model for other datasets also.


A part of the code has been borrowed from
