A collection of implementations of adversarial unsupervised domain adaptation algorithms.
The goal of domain adaptation is to transfer the knowledge of a model to a different but related data distribution. The model is trained on a source dataset and applied to a target dataset, which is usually unlabeled. Here, the model is trained on regular MNIST images, and we want good performance on MNIST-M (MNIST digits blended with random color), without using any target labels.
In adversarial domain adaptation, this problem is usually solved by training an auxiliary model called the domain discriminator, whose goal is to classify examples as coming from the source or the target distribution. The original feature extractor then tries to maximize the loss of the domain discriminator, analogous to the GAN training procedure.
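This min-max game can be sketched in a few lines. All numbers below are illustrative, and the "discriminator" is reduced to a logistic regression on 2-D features so the example runs without a deep-learning framework:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce(p, y):
    # Binary cross-entropy between predicted probabilities p and labels y.
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

# Hypothetical 2-D features for a source batch and a target batch.
f_src = rng.normal(0.0, 1.0, size=(8, 2))
f_tgt = rng.normal(1.5, 1.0, size=(8, 2))

feats = np.vstack([f_src, f_tgt])
labels = np.concatenate([np.zeros(8), np.ones(8)])  # 0 = source, 1 = target

# Domain discriminator: a logistic regression on the features.
w = rng.normal(size=2)
p = sigmoid(feats @ w)
disc_loss = bce(p, labels)

# Gradient of the discriminator loss with respect to the features.
g_feats = ((p - labels)[:, None] * w[None, :]) / len(labels)

# The discriminator descends this loss; the feature extractor *ascends* it
# (follows the negated gradient) -- the GAN-style min-max game.
feats_up = feats + 0.01 * g_feats
disc_loss_up = bce(sigmoid(feats_up @ w), labels)  # higher than disc_loss
```

A real implementation would, of course, backpropagate through the feature extractor's own parameters; the point here is only that the two players optimize the same loss in opposite directions.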
Paper: Unsupervised Domain Adaptation by Backpropagation, Ganin & Lempitsky (2014)
Description: Negates the gradient of the domain discriminator's loss before it reaches the feature extractor (a gradient reversal layer), so both networks can be trained simultaneously.
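The reversal itself is easy to illustrate with a hand-rolled forward/backward pair (a toy sketch, not the repository's actual layer; `lam` stands in for the paper's scaling factor λ):

```python
def grl_forward(x):
    # The gradient reversal layer is the identity on the forward pass.
    return x

def grl_backward(grad_output, lam=1.0):
    # On the backward pass the upstream gradient is negated and scaled,
    # so the feature extractor ascends the discriminator loss while the
    # discriminator itself still descends it.
    return -lam * grad_output

x = 3.0
h = grl_forward(x)      # identity: h == 3.0
loss = h ** 2           # stand-in for the discriminator loss
grad_h = 2 * h          # d(loss)/dh = 6.0
grad_x = grl_backward(grad_h, lam=0.5)  # gradient seen by the extractor: -3.0
```

In PyTorch this is typically implemented as a custom `autograd.Function` whose `forward` is the identity and whose `backward` returns the negated gradient.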
Paper: Adversarial Discriminative Domain Adaptation, Tzeng et al. (2017)
Description: Adapts the weights of a classifier pretrained on source data to produce similar features on the target data.
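A toy sketch of ADDA's inverted-label objective (the discriminator outputs below are made-up numbers for illustration): the discriminator scores target features against their true domain label, while the target encoder is updated to make the same features look like source features.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce(p, y):
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

# Hypothetical discriminator logits for a batch of *target* features
# (probability that a feature came from the target domain).
p_tgt = sigmoid(np.array([2.0, 1.5, 1.0]))

# Discriminator update: target features carry their true label (1 = target).
d_loss = bce(p_tgt, np.ones(3))

# Target-encoder update: the same outputs are scored against the *inverted*
# label (0 = source), pushing target features toward the source distribution.
enc_loss = bce(p_tgt, np.zeros(3))  # large while the domains are separable
```

The target encoder starts from the weights of the source-pretrained network, and only it (not the source encoder) is updated during adaptation.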
Paper: Wasserstein Distance Guided Representation Learning for Domain Adaptation, Shen et al. (2017)
Description: Uses a domain critic to minimize the Wasserstein Distance (with Gradient Penalty) between domains.
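A sketch of that objective, assuming a *linear* critic so its input gradient is analytic (the features and critic weights are illustrative): the critic estimates the Wasserstein-1 distance as the difference of mean scores, and the gradient penalty pushes it toward being 1-Lipschitz.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-D features from the two domains.
h_src = rng.normal(0.0, 1.0, size=(8, 2))
h_tgt = rng.normal(2.0, 1.0, size=(8, 2))

# Linear critic f(h) = h @ w: its gradient w.r.t. any input is just w.
w = np.array([3.0, 4.0])

# Empirical Wasserstein-1 estimate: E_src[f(h)] - E_tgt[f(h)].
wd = np.mean(h_src @ w) - np.mean(h_tgt @ w)

# Gradient penalty, evaluated on random interpolates between the domains:
# (||grad f(h_hat)|| - 1)^2. For a linear critic the gradient at every
# interpolate is w, so the penalty is (||w|| - 1)^2.
eps = rng.uniform(size=(8, 1))
h_hat = eps * h_src + (1 - eps) * h_tgt
grad_norm = np.linalg.norm(w)      # ||w|| = 5 here
penalty = (grad_norm - 1.0) ** 2   # penalizes deviation from norm 1
```

With a nonlinear critic the gradient at each `h_hat` must be obtained by automatic differentiation, which is what the actual implementation does.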
|Method|Accuracy on MNIST-M|Parameters|
|---|---|---|
| | |`--k-clf 10 --wd-clf 0.1`|
Update the data path in `config.py` to point to this location.
```
$ conda install pytorch torchvision numpy -c pytorch
$ pip install tqdm opencv-python
```
```
$ python train_source.py
$ python adda.py trained_models/source.pt
```