NeurIPS'18: Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels (PyTorch implementation).
Another related NeurIPS'18 work:
Masking: A New Perspective of Noisy Supervision
Code available: https://github.com/bhanML/Masking
========
This is the code for the paper:
Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels
Bo Han, Quanming Yao, Xingrui Yu, Gang Niu, Miao Xu, Weihua Hu, Ivor Tsang, Masashi Sugiyama
Presented at NeurIPS 2018.
If you find this code useful in your research, please cite:
```bibtex
@inproceedings{han2018coteaching,
  title={Co-teaching: Robust training of deep neural networks with extremely noisy labels},
  author={Han, Bo and Yao, Quanming and Yu, Xingrui and Niu, Gang and Xu, Miao and Hu, Weihua and Tsang, Ivor and Sugiyama, Masashi},
  booktitle={NeurIPS},
  pages={8535--8545},
  year={2018}
}
```
All code was developed and tested on a single machine equipped with an NVIDIA K80 GPU, using Python 2.7 and PyTorch 0.3.0 with CUDA 8.0 (as pinned by the wheel below).
Install PyTorch via:
```bash
pip install http://download.pytorch.org/whl/cu80/torch-0.3.0.post4-cp27-cp27mu-linux_x86_64.whl
```
Here is an example of running Co-teaching on CIFAR-10 with 50% symmetric label noise:

```bash
python main.py --dataset cifar10 --noise_type symmetric --noise_rate 0.5
```
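For reference, the core idea of Co-teaching is that two networks are trained simultaneously: in each mini-batch, each network selects its small-loss samples (a `1 - forget_rate` fraction, treated as likely clean under the memorization effect) and passes them to its peer for the parameter update. Below is a minimal sketch of one such update step. It is written against a recent PyTorch API (not the pinned 0.3.0 build above), and the function name `coteaching_step` and its arguments are illustrative, not the repository's actual implementation.

```python
import torch
import torch.nn.functional as F

def coteaching_step(model1, model2, opt1, opt2, x, y, forget_rate):
    # Forward pass of both networks on the same mini-batch.
    logits1, logits2 = model1(x), model2(x)

    # Per-sample cross-entropy losses (no reduction), used only for ranking.
    loss1 = F.cross_entropy(logits1, y, reduction='none')
    loss2 = F.cross_entropy(logits2, y, reduction='none')

    # Each network keeps its (1 - forget_rate) fraction of small-loss samples.
    num_keep = int((1.0 - forget_rate) * y.size(0))
    idx1 = torch.argsort(loss1)[:num_keep]  # samples model1 trusts
    idx2 = torch.argsort(loss2)[:num_keep]  # samples model2 trusts

    # Cross update: each network learns from the samples selected by its peer.
    opt1.zero_grad()
    F.cross_entropy(logits1[idx2], y[idx2]).backward()
    opt1.step()

    opt2.zero_grad()
    F.cross_entropy(logits2[idx1], y[idx1]).backward()
    opt2.step()
```

In the paper, `forget_rate` is scheduled to grow from 0 toward the noise rate over the first epochs, so the networks gradually discard more of the (likely noisy) large-loss samples.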
Test accuracy of Co-teaching under different noise types and rates:

| (Flipping, Rate) | MNIST | CIFAR-10 | CIFAR-100 |
| ---------------: | -----: | -------: | --------: |
| (Pair, 45%) | 87.58% | 72.85% | 34.40% |
| (Symmetry, 50%) | 91.68% | 74.49% | 41.23% |
| (Symmetry, 20%) | 97.71% | 82.18% | 54.36% |
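As a rough illustration of what `--noise_type` / `--noise_rate` correspond to, the sketch below injects symmetric label noise: each label is flipped, with probability equal to the noise rate, to a uniformly chosen different class. This is only an assumption-labelled sketch of the standard symmetric-noise model; the repository's own noise-generation utilities (and its pair-flipping variant) may differ in details such as seeding and the exact transition matrix.

```python
import numpy as np

def add_symmetric_noise(labels, noise_rate, num_classes, seed=0):
    """Flip each label to a uniformly chosen *different* class with
    probability `noise_rate` (symmetric noise model)."""
    rng = np.random.RandomState(seed)
    noisy = np.array(labels, copy=True)
    flip = rng.rand(len(noisy)) < noise_rate
    for i in np.where(flip)[0]:
        candidates = [c for c in range(num_classes) if c != noisy[i]]
        noisy[i] = rng.choice(candidates)
    return noisy

# Example: corrupt 50% of hypothetical CIFAR-10-sized labels symmetrically.
clean = np.random.randint(0, 10, size=50000)
corrupted = add_symmetric_noise(clean, noise_rate=0.5, num_classes=10)
print((corrupted != clean).mean())  # roughly 0.5
```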
Contact: Xingrui Yu ([email protected]); Bo Han ([email protected]).
Please also check the automated machine learning (AutoML) version of Co-teaching: Searching to Exploit Memorization Effect in Learning from Corrupted Labels (ICML 2020); see that paper and its accompanying code.