
YOPO (You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle)

Code for our paper: "You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle" by Dinghuai Zhang, Tianyuan Zhang, Yiping Lu, Zhanxing Zhu, Bin Dong.

Our paper has been accepted at NeurIPS 2019.

The Pipeline of YOPO
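As the pipeline suggests, YOPO-m-n performs m full forward/backward passes per batch and, after each, reuses the gradient at the first layer's output for n cheap perturbation updates that propagate through the first layer only. Below is a minimal PyTorch sketch of that idea under our own simplifications; names such as first_layer, rest_of_net, and sigma are illustrative, not the repository's API, and the actual code also accumulates weight gradients during these loops, which the sketch omits.

    # Minimal sketch of the YOPO-m-n idea (NOT the authors' exact code).
    # Each outer step does ONE full forward/backward; the gradient p at the
    # first layer's output is then frozen and reused for n cheap inner
    # updates of the perturbation through the first layer only.
    import torch
    import torch.nn.functional as F

    def yopo_perturb(first_layer, rest_of_net, x, y,
                     m=5, n=3, eps=8/255, sigma=2/255):
        # eta: adversarial perturbation, randomly initialized in the eps-ball
        eta = torch.empty_like(x).uniform_(-eps, eps)
        for _ in range(m):                  # m outer iterations: full prop each
            eta.requires_grad_(True)
            z = first_layer(x + eta)        # output of the first layer
            loss = F.cross_entropy(rest_of_net(z), y)
            # p: gradient of the loss at the first layer's output, kept fixed
            p = torch.autograd.grad(loss, z)[0].detach()
            for _ in range(n):              # n inner iterations: first layer only
                g = torch.autograd.grad((first_layer(x + eta) * p).sum(), eta)[0]
                eta = (eta.detach() + sigma * g.sign()).clamp(-eps, eps)
                eta.requires_grad_(True)
        return (x + eta).clamp(0, 1).detach()  # final adversarial example

The training loop would then take an ordinary gradient step on the returned adversarial examples.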

Prerequisites

  • PyTorch == 1.0.1, torchvision
  • Python 3.5
  • tensorboardX
  • easydict
  • tqdm

Install

git clone https://github.com/a1600012888/YOPO-You-Only-Propagate-Once.git
cd YOPO-You-Only-Propagate-Once
pip3 install -r requirements.txt --user

How to run our code

Natural training and PGD training

  • Normal training: go to experiments/CIFAR10/wide34.natural
  • PGD adversarial training: go to experiments/CIFAR10/wide34.pgd10
    (a generic sketch of the PGD step follows this list)

In either directory, run

    python train.py -d
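For reference, here is what the inner maximization of the wide34.pgd10 baseline amounts to: a textbook PGD-10 attack, not the repository's exact code. The eps and step sizes below are the common CIFAR-10 values and are assumptions here.

    # Generic PGD attack sketch: 10 iterations of L-inf projected gradient
    # ascent with a random start (a standard formulation, assumed here).
    import torch
    import torch.nn.functional as F

    def pgd_attack(model, x, y, eps=8/255, step=2/255, iters=10):
        # random start inside the eps-ball, clipped to valid pixel range
        x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
        for _ in range(iters):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            x_adv = x_adv.detach() + step * grad.sign()              # ascent step
            x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)   # project back
        return x_adv.detach()

Note that each PGD iteration needs a full forward and backward pass through the network, which is exactly the cost YOPO avoids.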

You can change all the hyper-parameters in config.py and the network in network.py. The code in the above directories is quite flexible and easy to modify, so it can be used as a template.
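Since the repo depends on easydict, such a config.py typically collects hyper-parameters in an attribute-style dict like the sketch below. Every field name here is hypothetical and only illustrates the pattern, not the repository's actual settings.

    # Hypothetical easydict-based config (field names are made up)
    from easydict import EasyDict

    config = EasyDict()
    config.num_epochs = 105        # training length
    config.lr = 0.1                # initial SGD learning rate
    config.weight_decay = 5e-4
    config.eps = 8 / 255           # L-inf perturbation budget
    config.sigma = 2 / 255         # attack step size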

YOPO training

Go to the directory experiments/CIFAR10/wide34.yopo-5-3 (YOPO-5-3, i.e. m = 5 outer and n = 3 inner iterations in the paper's naming) and run

    python train.py -d

Again, you can change all the hyper-parameters in config.py and the network in network.py. Running this code for the first time will download the dataset into ./experiments/CIFAR10/data/; you can modify this path in dataset.py.
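The download step likely boils down to the standard torchvision pattern sketched below; this is an assumption about what dataset.py does, not its actual contents. Changing root relocates the data.

    # Standard torchvision CIFAR-10 download pattern (assumed, not verbatim)
    import torchvision
    import torchvision.transforms as transforms

    train_set = torchvision.datasets.CIFAR10(
        root='./experiments/CIFAR10/data/',  # path mentioned in this README
        train=True,
        download=True,                       # fetches the data on first run
        transform=transforms.ToTensor(),
    )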

Miscellaneous

A C++ implementation by Nitin Shyamkumar is provided here! Thank you Nitin for your work!

The main body of experiments/CIFAR10-TRADES/baseline.res-pre18.TRADES.10step is written following the official TRADES repo.

A TensorFlow implementation by Runtian Zhai is provided here; it also includes an implementation of the "For Free" paper. It turns out that YOPO is faster than "For Free" (detailed results will come soon). Thanks to Runtian for his help!

Cite

@article{zhang2019you,
  title={You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle},
  author={Zhang, Dinghuai and Zhang, Tianyuan and Lu, Yiping and Zhu, Zhanxing and Dong, Bin},
  journal={arXiv preprint arXiv:1905.00877},
  year={2019}
}
