
Minimal PyTorch implementation of Yolact: "YOLACT: Real-time Instance Segmentation".
The original project is here.

This implementation simplifies the original code, preserves the main functionality, and keeps the network easy to understand.
This implementation has not been updated to Yolact++.

The network structure.



Environment

  • PyTorch >= 1.1
  • Python >= 3.6
  • onnxruntime-gpu == 1.6.0 (for CUDA 10.2)
  • TensorRT ==
  • Other common packages.
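Version constraints like those above are easy to get wrong if version strings are compared as plain text. A small stdlib sketch (not part of this repo) that compares dotted versions numerically:

```python
def version_tuple(v: str):
    """Turn a dotted version string like '1.10.2' into (1, 10, 2)."""
    return tuple(int(x) for x in v.split("."))

# Numeric comparison handles multi-digit components correctly:
print(version_tuple("1.2.0") >= version_tuple("1.10"))  # False: 1.2 is older than 1.10
# A plain string comparison would get this wrong:
print("1.2.0" >= "1.10")  # True, which is misleading
```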


```Shell
# Build cython-nms.
python setup.py build_ext --inplace
```
  • Download the COCO 2017 dataset and modify the dataset path in 'res101_coco' in the configuration.
  • Download weights.

Yolact trained weights.

| Backbone | box mAP | mask mAP | number of parameters | Google Drive | Baidu Cloud |
|:---------:|:-------:|:--------:|:--------------------:|:------------:|:-----------:|
| Resnet50 | 31.3 | 28.8 | 31.16 M | best_28.8_res50_coco_340000.pth | password: uu75 |
| Resnet101 | 33.4 | 30.4 | 50.15 M | best_30.4_res101_coco_340000.pth | password: njsk |
| swin_tiny | 34.3 | 32.1 | 34.58 M | best_31.9_swin_tiny_coco_308000.pth | password: i8e9 |

ImageNet pre-trained weights.

| Backbone | Google Drive | Baidu Cloud |
|:---------:|:------------:|:-----------:|
| Resnet50 | backbone_res50.pth | password: juso |
| Resnet101 | backbone_res101.pth | password: 5wsp |
| swin_tiny | swin-tiny.pth | password: g0o2 |

Improvement log

2021.4.19. Use the swin_tiny transformer as backbone, +1.0 box mAP, +1.4 mask mAP.
2021.1.7. Focal loss did not help; tried conf_alpha values of 4, 6, 7, 8.
2021.1.7. Fewer training iterations, 800k --> 680k with batch size 8.
2020.11.2. Improved data augmentation and used rectangular anchors; training is stable and infinite loss no longer appears.
2020.11.2. DDP training; train batch size increased to 16, +0.4 box mAP, +0.7 mask mAP (resnet101).


Train

```Shell
# Train with the resnet101 backbone on one GPU with a batch size of 8 (default).
python -m torch.distributed.launch --nproc_per_node=1 --master_port=$((RANDOM)) --train_bs=8
# Train on multiple GPUs (e.g. two GPUs, 8 images per GPU).
export CUDA_VISIBLE_DEVICES=0,1  # Select the GPUs to use.
python -m torch.distributed.launch --nproc_per_node=2 --master_port=$((RANDOM)) --train_bs=16
# Train with other configurations (res101_coco, res50_coco, res50_pascal, res101_custom, res50_custom, in total).
python -m torch.distributed.launch --nproc_per_node=1 --master_port=$((RANDOM)) --cfg=res50_coco
# Train with a different batch size (batch size should not be smaller than 4).
python -m torch.distributed.launch --nproc_per_node=1 --master_port=$((RANDOM)) --train_bs=4
# Train with a different image size (anchor settings related to image size are adjusted automatically).
python -m torch.distributed.launch --nproc_per_node=1 --master_port=$((RANDOM)) --img_size=400
# Resume training with a specified model.
python -m torch.distributed.launch --nproc_per_node=1 --master_port=$((RANDOM)) --resume=weights/latest_res101_coco_35000.pth
# Set the evaluation interval during training; set -1 to disable it.
python -m torch.distributed.launch --nproc_per_node=1 --master_port=$((RANDOM)) --val_interval 8000
# Train on CPU.
python --train_bs=4
```
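The automatic anchor adjustment mentioned for `--img_size` can be pictured as scaling the per-level anchor sizes by the ratio of the new input size to the 550x550 default. A hypothetical sketch (the base scales below follow the YOLACT paper; the repo's actual config logic may differ):

```python
# Hypothetical anchor rescaling for a non-default --img_size.
# YOLACT's 550x550 default uses one base anchor size per FPN level.
DEFAULT_IMG_SIZE = 550
BASE_SCALES = [24, 48, 96, 192, 384]  # per-level sizes from the YOLACT paper

def scaled_anchors(img_size, base_scales=BASE_SCALES):
    ratio = img_size / DEFAULT_IMG_SIZE
    return [round(s * ratio) for s in base_scales]

print(scaled_anchors(400))  # smaller input -> proportionally smaller anchors
```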

Use tensorboard

```Shell
tensorboard --logdir=tensorboard_log/res101_coco
```


Evaluation

```Shell
# Select the GPU to use.
# Evaluate on COCO val2017 (the configuration is parsed from the model name).
# The metric API in this project cannot compute the exact COCO mAP, but the evaluation speed is fast.
python --weight=weights/best_30.4_res101_coco_340000.pth
# To get the exact COCO mAP:
python --weight=weights/best_30.4_res101_coco_340000.pth --coco_api
# Evaluate with a specified number of images.
python --weight=weights/best_30.4_res101_coco_340000.pth --val_num=1000
# Evaluate with traditional NMS.
python --weight=weights/best_30.4_res101_coco_340000.pth --traditional_nms
```
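The `--traditional_nms` flag refers to classic greedy NMS, as opposed to the Fast NMS that YOLACT uses by default. A pure-Python sketch of the greedy algorithm with toy boxes (the repo's cython-nms is an optimized version of the same idea):

```python
# Greedy NMS: repeatedly keep the highest-scoring box and discard any
# remaining box that overlaps it by more than the IoU threshold.
def iou(a, b):
    # Boxes are (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def nms(boxes, scores, iou_thresh=0.5):
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= iou_thresh]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 10, 10), (20, 20, 30, 30)]
print(nms(boxes, [0.9, 0.8, 0.7]))  # [0, 2]: box 1 overlaps box 0 too much
```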


Detection

  • Detection result

```Shell
# Select the GPU to use.
# To detect images, pass the path of the image folder; detected images will be saved in `results/images`.
python --weight=weights/best_30.4_res101_coco_340000.pth --image=images
```

  • Cutout object

```Shell
# Use --cutout to cut out detected objects.
python --weight=weights/best_30.4_res101_coco_340000.pth --image=images --cutout
# To detect videos, pass the path of the video; the detected video will be saved in `results/videos`.
python --weight=weights/best_30.4_res101_coco_340000.pth --video=videos/1.mp4
# Use --real_time to detect in real time.
python --weight=weights/best_30.4_res101_coco_340000.pth --video=videos/1.mp4 --real_time
```

  • Linear combination result

```Shell
# Use --hide_mask, --hide_score, --save_lincomb, --no_crop and so on to get different results.
python --weight=weights/best_30.4_res101_coco_340000.pth --image=images --save_lincomb
```
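The "linear combination" visualized by `--save_lincomb` is YOLACT's mask assembly: each instance mask is a per-pixel weighted sum of shared prototype masks, passed through a sigmoid. A toy sketch with made-up numbers (real prototypes are full-resolution feature maps):

```python
import math

# Two toy "prototype masks", flattened to 4 pixels each.
protos = [[1.0, 0.0, 0.5, 0.0],
          [0.0, 1.0, 0.5, 0.0]]
coeffs = [2.0, -2.0]  # one instance's predicted mask coefficients

# Linear combination per pixel, then sigmoid to get mask probabilities.
lincomb = [sum(c * p[px] for c, p in zip(coeffs, protos)) for px in range(4)]
mask = [1 / (1 + math.exp(-v)) for v in lincomb]
print([round(m, 2) for m in mask])  # [0.88, 0.12, 0.5, 0.5]
```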

Export to ONNX

```Shell
python --weight='weights/best_30.4_res101_coco_340000.pth' --opset=12
# Detect with the ONNX file; all the options are the same as above.
python --weight='onnx_files/res101_coco.onnx' --image=images
```

Accelerate with TensorRT

```Shell
python --weight='onnx_files/res101_coco.onnx'
# Detect with TensorRT; all the options are the same as above.
python --weight='trt_files/res101_coco.trt' --image=images
```

Train on PASCAL_SBD datasets

  • Download the PASCAL_SBD dataset from here, and modify the path of the dataset folder in the configuration.

    ```Shell
    # Generate a coco-style json.
    python utils/ --folder_path=/home/feiyu/Data/pascal_sbd
    # Training.
    python -m torch.distributed.launch --nproc_per_node=1 --master_port=$((RANDOM)) --cfg=res50_pascal
    ```

Train custom datasets

  • Install labelme
    pip install labelme
  • Use labelme to label your images; only polygons are needed. The created json files are in the same folder as the images.
  • Prepare a 'labels.txt'; the first line, 'background', is always needed.
  • Prepare the coco-style json: pass the paths of your image folder and the labels.txt. The image type is also needed. The 'custom_dataset' folder is a prepared example.

    ```Shell
    python utils/ --img_dir=custom_dataset --label_name=custom_dataset/labels.txt --img_type=jpg
    ```
  • Edit the class names in the configuration.
    Note that if there's only one class, it should be written as ('dog', ). The final comma is necessary to make it a tuple; otherwise Python treats ('dog') as a plain string, and the number of classes would be counted as the length of that string.
  • Choose a configuration ('res101_custom' or 'res50_custom') and modify the corresponding settings. If you need to validate, prepare the validation dataset in the same way.
  • Then train.
    python -m torch.distributed.launch --nproc_per_node=1 --master_port=$((RANDOM)) --cfg=res101_custom
  • Some parameters need to be taken care of by yourself: 1) Training batch size: try not to use a batch size smaller than 4. 2) Anchor size: the anchor sizes should match the object scales of your dataset. 3) Total training steps, learning rate decay steps and the warm-up steps: these should be decided according to the dataset size; override them in your configuration.
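The single-class tuple note above is a standard Python pitfall worth a quick demonstration:

```python
# A trailing comma is what makes a one-element tuple; without it the
# parentheses are just grouping and you get a plain string.
class_names = ('dog',)
not_a_tuple = ('dog')
print(len(class_names))  # 1 class, as intended
print(len(not_a_tuple))  # 3 -- the length of the string 'dog'
```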


Citation

```
@inproceedings{yolact-iccv2019,
  author    = {Daniel Bolya and Chong Zhou and Fanyi Xiao and Yong Jae Lee},
  title     = {YOLACT: {Real-time} Instance Segmentation},
  booktitle = {ICCV},
  year      = {2019},
}

@article{liu2021swin,
  title     = {Swin Transformer: Hierarchical Vision Transformer using Shifted Windows},
  author    = {Liu, Ze and Lin, Yutong and Cao, Yue and Hu, Han and Wei, Yixuan and Zhang, Zheng and Lin, Stephen and Guo, Baining},
  journal   = {arXiv preprint arXiv:2103.14030},
  year      = {2021}
}
```
