Fully Convolutional HarDNet for Segmentation in Pytorch

| Method | #Param (M) | GMACs / GFLOPs | Cityscapes mIoU | fps on Titan-V @1024x2048 | fps on 1080ti @1024x2048 |
| :---: | :---: | :---: | :---: | :---: | :---: |
| ICNet | 7.7 | 30.7 | 69.5 | 63 | 48 |
| SwiftNetRN-18 | 11.8 | 104 | 75.5 | - | 39.9 |
| BiSeNet (1024x2048) | 13.4 | 119 | 77.7 | 36 | 27 |
| BiSeNet (768x1536) | 13.4 | 66.8 | 74.7 | 72** | 54** |
| FC-HarDNet-70 | 4.1 | 35.4 | 76.0 | 70 | 53 |
Setup config file
Please see the usage section in meetshah1995/pytorch-semseg for how the config file and dataset paths are set up.
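The config is a plain YAML file, so it can also be inspected programmatically. The snippet below is only a minimal sketch of loading it, assuming the usual pytorch-semseg layout with `model`, `data`, and `training` sections and a `configs/hardnet.yml` path; check the actual config shipped with this repo for the exact keys.

```python
# Minimal sketch (assumed key names following the pytorch-semseg config layout);
# verify against the actual configs/hardnet.yml shipped with this repo.
import yaml

with open("configs/hardnet.yml") as fp:
    cfg = yaml.safe_load(fp)

# Typical sections to check before training (section names are assumptions):
print(cfg.get("model"))     # architecture settings
print(cfg.get("data"))      # dataset name, splits, and dataset root path
print(cfg.get("training"))  # batch size, iterations, optimizer, etc.
```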
To train the model:

    python train.py [-h] [--config [CONFIG]]

    --config    Configuration file to use (default: hardnet.yml)
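For example, a typical run with the default config might look like this (the `configs/hardnet.yml` path is an assumption; adjust it to wherever the config lives in your checkout):

```
python train.py --config configs/hardnet.yml
```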
To validate the model:

    usage: validate.py [-h] [--config [CONFIG]] [--model_path [MODEL_PATH]]
                       [--save_image] [--eval_flip] [--measure_time]

    --config        Config file to be used
    --model_path    Path to the saved model
    --eval_flip     Enable evaluation with flipped image | False by default
    --measure_time  Enable evaluation with time (fps) measurement | True by default
    --save_image    Enable writing result images to out_rgb (pred label blended images) and out_predID
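For example, evaluating a trained checkpoint with flip augmentation and saved result images might look like the following (the config and checkpoint paths here are placeholders, not files shipped with the repo):

```
python validate.py --config configs/hardnet.yml \
                   --model_path checkpoints/hardnet_cityscapes_best_model.pkl \
                   --eval_flip --save_image
```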