Countering Adversarial Images Using Input Transformations

Overview

This package implements the experiments described in the paper Countering Adversarial Images Using Input Transformations. It contains implementations of adversarial attacks, defenses based on image transformations, and code for training and testing convolutional networks under adversarial attack using our defenses. We also provide pre-trained models.

If you use this code, please cite our paper:

  • Chuan Guo, Mayank Rana, Moustapha Cisse, and Laurens van der Maaten. Countering Adversarial Images using Input Transformations. arXiv 1711.00117, 2017. [PDF]

Adversarial Defenses

The code implements the following four defenses against adversarial images, all of which are based on image transformations:

  • Image quilting
  • Total variation minimization
  • JPEG compression
  • Pixel quantization

Please refer to the paper for details on these defenses. A detailed description of the original image quilting algorithm can be found here; a detailed description of our solver for total variation minimization can be found here.
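As a concrete illustration, here is a minimal sketch of the two simplest defenses, pixel quantization and JPEG compression, written against NumPy and Pillow. This is illustrative only; the package's own implementations may differ.

```python
import io

import numpy as np
from PIL import Image

def quantize(image, depth=3):
    """Quantize a float image in [0, 1] to 2**depth levels per channel."""
    levels = 2 ** depth - 1
    return np.round(image * levels) / levels

def jpeg_defense(image, quality=75):
    """Round-trip a float image in [0, 1] through JPEG compression."""
    buf = io.BytesIO()
    Image.fromarray((image * 255).astype(np.uint8)).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return np.asarray(Image.open(buf), dtype=np.float32) / 255.0
```

Both transformations discard the small, structured perturbations an attacker adds, at the cost of some image fidelity.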

Adversarial Attacks

The code implements the following four approaches to generating adversarial images:

  • Fast gradient sign method (FGSM)
  • Iterative FGSM
  • DeepFool
  • Carlini-Wagner attack
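For reference, here is a minimal FGSM sketch, written against the current PyTorch API rather than the v0.2.0 Variable API used by this package; it is illustrative, not the package's implementation.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, epsilon):
    """Return x + epsilon * sign(grad_x loss): a single step in the
    direction that increases the classification loss for label y."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + epsilon * x.grad.sign()).detach()
```

Iterative FGSM repeats this step several times with a smaller epsilon, clipping back to the allowed perturbation range after each step.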

Installation

To use this code, first install Python, PyTorch, and Faiss (to perform image quilting). We tested the code using Python 2.7, PyTorch v0.2.0, and scikit-image 0.11; your mileage may vary when using other versions.

PyTorch can be installed using the instructions here. Faiss is required to run the image quilting algorithm; it is not installed automatically because Faiss does not have pip support and requires configuring BLAS and LAPACK flags, as described here. Please install Faiss using the instructions given here.

The code uses several other external dependencies (for training Inception models, performing Bregman iteration, etc.). These dependencies are downloaded and installed automatically when you install this package via pip:

```bash
# Install from source
cd adversarial_image_defenses
pip install .
```

Usage

To import the package in Python:

```python
import adversarial
```

The functionality implemented in this package is demonstrated in this example. Run the example via:

```bash
python adversarial/examples/demo.py
```

API

The full functionality of the package is exposed via several runnable Python scripts. All of these scripts require the user to specify the path to the ImageNet dataset, the path to the pre-trained models, and the path to the quilted images (once they are computed) in `lib/path_config.json`. Alternatively, the paths can be passed as input arguments to the scripts.
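For illustration, such a config might look as follows. The key names below are hypothetical placeholders; check `lib/path_config.json` in the repository for the actual keys.

```json
{
  "imagenet_dir": "/data/imagenet",
  "models_root": "/checkpoints/adversarial_defenses",
  "quilting_root": "/data/imagenet_quilted"
}
```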

Generate quilting patches

`index_patches.py` creates a Faiss index of image patches. This index can be used to perform quilting of images.

Code example:

```python
import adversarial
from index_patches import create_faiss_patches, parse_args

args = parse_args()
# Update args if needed:
args.patch_size = 5
create_faiss_patches(args)
```

Alternatively, run `python index_patches.py`. The following arguments are supported:

  • --patch_size
    Patch size (square) that will be used in quilting (default: 5).
  • --num_patches
    Number of patches to generate (default: 1000000).
  • --pca_dims
    PCA dimension for Faiss (default: 64).
  • --patches_file
    File in which patches are saved.
  • --index_file
    File in which the Faiss index of patches is saved.
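An illustrative invocation using the flags above (values are examples only):

```bash
python index_patches.py --patch_size 5 --num_patches 1000000 --pca_dims 64
```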

Image transformations

`gen_transformed_images.py` applies an image transformation to (adversarial or non-adversarial) ImageNet images and saves the result to disk. Image transformations such as image quilting are too computationally intensive to be performed on the fly during network training, which is why we precompute the transformed images.

Code example:

```python
import adversarial
from gen_transformed_images import generate_transformed_images
from lib import opts

# Load default args for transformation functions:
args = opts.parse_args(opts.OptType.TRANSFORMATION)
args.operation = "transformation_on_raw"
args.defenses = ["tvm"]
args.partition_size = 1  # Number of samples to generate

generate_transformed_images(args)
```

Alternatively, run `python gen_transformed_images.py`. In addition to the common arguments and adversarial arguments, the following arguments are supported:

  • --operation
    Operation to run. Supported operations are `transformation_on_raw` (apply transformations to raw images), `transformation_on_adv` (apply transformations to adversarial images), and `cat_data` (concatenate the output from distributed `transformation_on_adv` runs).
  • --data_type
    Data type (`train` or `raw`) for `transformation_on_raw` (default: `train`).
  • --out_dir
    Directory path for the output of `cat_data`.
  • --partition_dir
    Directory path for the output of transformed data.
  • --data_batches
    Number of data batches to generate; used for random crops for ensembling.
  • --partition
    Distributed data partition (default: 0).
  • --partition_size
    Size of each data partition. For `transformation_on_raw`, `partition_size` is the number of classes per process; for `transformation_on_adv`, it is the number of images per process.
  • --n_threads
    Number of threads for `transformation_on_raw`.
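An illustrative invocation, assuming paths are configured in `lib/path_config.json` (flag values are examples only):

```bash
python gen_transformed_images.py --operation transformation_on_raw --defenses tvm --data_type train --n_threads 20
```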

Generate TAR data index

Many file systems perform poorly when dealing with millions of small files (such as images). Therefore, we generally TAR our image datasets (obtained by running `generate_transformed_images`). Next, we use `gen_tar_index.py` to generate a file index for the TAR file. The file index facilitates fast, random-access reading of the TAR file; it is much faster and requires less memory than untarring the data or using the `tarfile` package.
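To make the idea concrete, here is a sketch of how such an index enables random access (this illustrates the concept only and is not the package's on-disk index format): scan the archive once to record each member's data offset and size, then read any member later with a single seek instead of re-scanning the archive.

```python
import tarfile

def build_tar_index(tar_path):
    """Map each file member's name to its (data offset, size) in the TAR."""
    with tarfile.open(tar_path) as tar:
        return {m.name: (m.offset_data, m.size) for m in tar if m.isfile()}

def read_tar_member(tar_path, index, name):
    """Read one member's bytes directly, without scanning the archive."""
    offset, size = index[name]
    with open(tar_path, "rb") as f:
        f.seek(offset)
        return f.read(size)
```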

Code example:

```python
import adversarial
from gen_tar_index import generate_tar_index, parse_args

args = parse_args()
generate_tar_index(args)
```

Alternatively, run `python gen_tar_index.py`. The following arguments are supported:

  • --tar_path
    Path of the TAR file or directory.
  • --index_root
    Directory in which to store the TAR index file.
  • --path_prefix
    Prefix that identifies the TAR member names to be indexed.

Adversarial Attacks

`gen_adversarial_images.py` implements the generation of adversarial images for the ImageNet dataset.

Code example:

```python
import adversarial
from gen_adversarial_images import generate_adversarial_images
from lib import opts

# Load default args for adversary functions:
args = opts.parse_args(opts.OptType.ADVERSARIAL)
args.model = "resnet50"
args.adversary_to_generate = "fgs"
args.partition_size = 1        # Number of samples to generate
args.data_type = "val"         # Input dataset type
args.normalize = True          # Apply normalization on input data
args.attack_type = "blackbox"  # For attack, use transformed models
args.pretrained = True         # Use pretrained model from the model zoo

generate_adversarial_images(args)
```

Alternatively, run `python gen_adversarial_images.py`. For a list of the supported arguments, see the common arguments and adversarial arguments.
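An illustrative invocation built from the documented flags (values are examples only):

```bash
python gen_adversarial_images.py --adversary fgs --adversary_model resnet50 --attack_type blackbox --n_samples 1000
```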

Training

`train_model.py` implements the training of convolutional networks on (transformed or non-transformed) ImageNet images.

Code example:

```python
import adversarial
from train_model import train_model
from lib import opts

# Load default args:
args = opts.parse_args(opts.OptType.TRAIN)
args.defenses = None   # One of: raw, tvm, quilting, jpeg, quantization
args.model = "resnet50"
args.normalize = True  # Apply normalization on input data

train_model(args)
```

Alternatively, run `python train_model.py`. In addition to the common arguments, the following arguments are supported:

  • --resume
    Resume training from a checkpoint (if available).
  • --lr
    Initial learning rate (default defined in constants.py).
  • --lr_decay
    Exponential learning rate decay (default defined in constants.py).
  • --lr_decay_stepsize
    Decay the learning rate after every stepsize epochs (default defined in constants.py).
  • --momentum
    Momentum (default: 0.9).
  • --weight_decay
    Amount of weight decay (default: 1e-4).
  • --start_epoch
    Index of the first epoch (default: 0).
  • --end_epoch
    Index of the last epoch (default: 90).
  • --preprocessed_epoch_data
    Augmented and transformed data for each epoch is pre-generated (default: `False`).
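An illustrative training invocation (flag values are examples only):

```bash
python train_model.py --model resnet50 --defenses tvm --start_epoch 0 --end_epoch 90
```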

Testing

`classify_images.py` implements the testing of a trained convolutional network on a dataset of (adversarial or non-adversarial, transformed or non-transformed) ImageNet images.

Code example:

```python
import adversarial
from classify_images import classify_images
from lib import opts

# Load default args:
args = opts.parse_args(opts.OptType.CLASSIFY)

classify_images(args)
```

Alternatively, run `python classify_images.py`. In addition to the common arguments, the following arguments are supported:

  • --ensemble
    Ensembling type: `None`, `avg`, or `max` (default: `None`).
  • --ncrops
    List of the number of crops to use per defense for ensembling (default: `None`).
  • --crop_frac
    List of crop fractions to use per defense for ensembling (default: `None`).
  • --crop_type
    List of crop types (`center`, `random`, or `sliding`, the last hard-set to 9 crops) to use per defense for ensembling (default: `None`).
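An illustrative invocation with crop ensembling (the flag combination is an example only):

```bash
python classify_images.py --model resnet50 --defenses tvm --ensemble avg --ncrops 10 --crop_type random
```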

Pre-trained models

We provide pre-trained models that were trained on ImageNet images processed using total variation minimization (TVM) or image quilting. They can be downloaded from the following links (set the `models_root` argument to the path that contains these model files):

Common arguments

The following arguments are used by multiple scripts, including `generate_transformed_images`, `train_model`, and `classify_images`:

Paths

  • --data_root
    Main data directory to save and read data.
  • --models_root
    Directory path to store/load models.
  • --tar_dir
    Directory path for transformed images (train/val) stored in TAR files.
  • --tar_index_dir
    Directory path for index files for transformed images in TAR files.
  • --quilting_index_root
    Directory path for quilting index files.
  • --quilting_patch_root
    Directory path for quilting patch files.

Train/Classifier params

  • --model
    Model to use (default: `resnet50`).
  • --device
    Device to use: cpu or gpu (default: `gpu`).
  • --normalize
    Normalize image data.
  • --batchsize
    Batch size for training and testing (default: 256).
  • --preprocessed_data
    Transformations/defenses are already applied to the saved images (default: `False`).
  • --defenses
    List of defenses to apply: `raw` (no defense), `tvm`, `quilting`, `jpeg`, `quantization` (default: `None`).
  • --pretrained
    Use a pretrained model from the PyTorch model zoo (default: `False`).

Transformation params

  • --tvm_weight
    Regularization weight for total variation minimization (TVM).
  • --pixel_drop_rate
    Pixel drop rate to use in TVM.
  • --tvm_method
    Reconstruction method to use in TVM (default:
    bregman
    ).
  • --quilting_patch_size
    Patch size to use in image quilting.
  • --quilting_neighbors
    Number of nearest patches to sample from in image quilting (default: 1).
  • --quantize_depth
    Bit depth for quantization defense (default: 8).

Adversarial arguments

The following arguments are used when generating adversarial images with `gen_transformed_images.py`:
  • --n_samples
    Maximum number of samples to test on.
  • --attack_type
    Attack type: `None` (no attack), `blackbox`, or `whitebox` (default: `None`).
  • --adversary
    Adversary to use: `fgs`, `ifgs`, `cwl2`, or `deepfool` (default: `None`).
  • --adversary_model
    Model to use for generating adversarial images (default: `resnet50`).
  • --learning_rate
    Learning rate for iterative adversarial attacks (default: read from constants).
  • --adv_strength
    Adversarial strength for non-iterative adversarial attacks (default: read from constants).
  • --adversarial_root
    Path containing adversarial images.
