Description

PyTorch-based modular, configuration-driven framework for knowledge distillation. 🏆 18 methods, including SOTA approaches, are implemented so far. 🎁 Trained models, training logs and configurations are available for ensuring reproducibility.

torchdistill: A Modular, Configuration-Driven Framework for Knowledge Distillation


torchdistill (formerly kdkit) offers various knowledge distillation methods and enables you to design (new) experiments simply by editing a yaml file instead of Python code. Even when you need to extract intermediate representations in teacher/student models, you will NOT need to reimplement the models, which often requires changing the interface of their forward functions; instead, you simply specify the module path(s) in the yaml file.

Forward hook manager

Using ForwardHookManager, you can extract intermediate representations in a model without modifying the interface of its forward function.
This example notebook will give you a better idea of the usage.
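
As a rough sketch of the typical flow (assuming a torchvision ResNet-18 as the target model and that ForwardHookManager is imported from torchdistill.core.forward_hook; defer to the example notebook for the authoritative usage):

```python
import torch
from torchvision import models
from torchdistill.core.forward_hook import ForwardHookManager

# Any PyTorch model works; a torchvision ResNet-18 is used here only for illustration.
device = torch.device('cpu')
model = models.resnet18(pretrained=True)

# Register hooks by module path; the model's forward interface stays untouched.
forward_hook_manager = ForwardHookManager(device)
forward_hook_manager.add_hook(model, 'layer2', requires_input=True, requires_output=False)
forward_hook_manager.add_hook(model, 'fc', requires_input=False, requires_output=True)

# Run a normal forward pass, then collect the stored inputs/outputs.
x = torch.rand(16, 3, 224, 224)
y = model(x)
io_dict = forward_hook_manager.pop_io_dict()
layer2_input = io_dict['layer2']['input']
fc_output = io_dict['fc']['output']
```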

Top-1 validation accuracy for ILSVRC 2012 (ImageNet)

| T: ResNet-34* | Pretrained | KD | AT | FT | CRD | Tf-KD | SSKD | L2 | PAD-L2 |
| :--- | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: |
| S: ResNet-18 | 69.76* | 71.37 | 70.90 | 71.56 | 70.93 | 70.52 | 70.09 | 71.08 | 71.71 |
| Original work | N/A | N/A | 70.70 | N/A** | 71.17 | 70.42 | 71.62 | 70.90 | 71.71 |

* The pretrained ResNet-34 and ResNet-18 are provided by torchvision.
** FT is assessed with ILSVRC 2015 in the original work.
For the 2nd row (S: ResNet-18), the checkpoint (trained weights), configuration and log files are available, and the configurations reuse the hyperparameters (e.g., number of epochs) used in the original work, except for KD.

Examples

Executable code can be found in examples/ such as:
  • Image classification: ImageNet (ILSVRC 2012), CIFAR-10, CIFAR-100, etc.
  • Object detection: COCO 2017, etc.
  • Semantic segmentation: COCO 2017, PASCAL VOC, etc.

Google Colab Examples

CIFAR-10 and CIFAR-100

  • Training without teacher models Open In Colab
  • Knowledge distillation Open In Colab

These examples are available in demo/. Note that the examples are for Google Colab users, and usually examples/ would be a better reference if you have your own GPU(s).

Citation

[Preprint]

bibtex
@article{matsubara2020torchdistill,
  title={torchdistill: A Modular, Configuration-Driven Framework for Knowledge Distillation},
  author={Matsubara, Yoshitomo},
  year={2020},
  eprint={2011.12913},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}

How to set up

  • Python >= 3.6
  • pipenv (optional)

Install by pip/pipenv

pip3 install torchdistill
# or use pipenv
pipenv install torchdistill

Install from this repository

git clone https://github.com/yoshitomo-matsubara/torchdistill.git
cd torchdistill/
pip3 install -e .
# or use pipenv
pipenv install "-e ."
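
After either install method, a quick sanity check (a minimal sketch; it assumes the package exposes a __version__ attribute) confirms the package imports cleanly:

```python
# Verify that torchdistill and its forward hook utility import without errors.
# __version__ is assumed to be defined by the package.
import torchdistill
from torchdistill.core.forward_hook import ForwardHookManager

print(torchdistill.__version__)
```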

Issues / Contact

The documentation is a work in progress. In the meantime, feel free to create an issue if you have a feature request, or email me ( [email protected] ) if you would like to ask me in private.
