Deep Metric Learning Research in PyTorch

What can I find here?

This repository contains all code and implementations used in:

Revisiting Training Strategies and Generalization Performance in Deep Metric Learning

accepted to ICML 2020.


The code is meant to serve as a research starting point in Deep Metric Learning. By implementing key baselines under a consistent setting and logging a vast set of metrics, it should be easier to ensure that method gains are not due to implementational variations, while better understanding driving factors.

It is set up in a modular way to allow for fast and detailed prototyping, but with key elements written in a way that allows the code to be directly copied into other pipelines. In addition, multiple training and test metrics are logged in W&B to allow for easy and large-scale evaluation.

Finally, a public W&B project containing the key runs performed in the paper is available online.

Contact: Karsten Roth

Suggestions are always welcome!

Some Notes:

If you use this code in your research, please cite

    @inproceedings{roth2020revisiting,
        title={Revisiting Training Strategies and Generalization Performance in Deep Metric Learning},
        author={Karsten Roth and Timo Milbich and Samarth Sinha and Prateek Gupta and Björn Ommer and Joseph Paul Cohen},
        booktitle={International Conference on Machine Learning (ICML)},
        year={2020}
    }
This repository contains (in parts) code that has been adapted from several other open-source Deep Metric Learning repositories.

Make sure to also check out related repositories offering great plug-and-play implementations of DML methods.

All implemented methods and metrics are listed at the bottom!

Paper-related Information

Reproduce results from our paper Revisiting Training Strategies and Generalization Performance in Deep Metric Learning

  • All standardized runs that were used in the paper are provided as training scripts.
  • These runs are also logged in the public W&B project.
  • All runs and their respective metrics can be downloaded and evaluated to generate the plots in our paper. This also allows for introspection of other relations, and converts results directly into LaTeX-table format with means and standard deviations.
  • To utilize different batch-creation methods, simply set the --data_sampler flag to the method of choice; the allowed options are listed in the parameter definitions.
  • To use the proposed spectral regularization for tuple-based methods, set --batch_mining rho_distance with flip probability --miner_rho_distance_cp, e.g. 0.2.
  • A script to run the toy experiments in the paper is provided as well.

Note: There may be small deviations in results depending on the hardware (e.g. P100 vs. RTX GPUs) and software (different PyTorch/CUDA versions) used to run these experiments, but they should be covered by the standard deviations reported in the paper.

How to use this Repo


  • PyTorch 1.2.0+ & Faiss-GPU
  • Python 3.6+
  • pretrainedmodels, torchvision 0.3.0+

An exemplary setup of a virtual environment containing everything needed:

(1) wget the Miniconda installer, e.g. https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
(2) bash Miniconda3-latest-Linux-x86_64.sh (say yes to appending the path to .bashrc)
(3) source ~/.bashrc
(4) conda create -n DL python=3.6
(5) conda activate DL
(6) conda install matplotlib scipy scikit-learn scikit-image tqdm pandas pillow
(7) conda install pytorch torchvision faiss-gpu cudatoolkit=10.0 -c pytorch
(8) pip install wandb pretrainedmodels
(9) Run the scripts!


Data for CUB200-2011, CARS196 and Stanford Online Products

can be downloaded either from the respective project sites or directly via Dropbox:

  • CUB200-2011 (1.08 GB)
  • CARS196 (1.86 GB)
  • SOP (2.84 GB)

The latter ensures that the folder structure is already consistent with this pipeline and the dataloaders.

Otherwise, please make sure that the datasets have the following internal structure:

  • For CUB200-2011/CARS196:

    |    └───001.Black_footed_Albatross
    |           │   Black_Footed_Albatross_0001_796111.jpg
    |           │   ...
    |    ...
  • For Stanford Online Products:

    |    └───bicycle_final
    |           │   111085122871_0.jpg
    |    ...
    |    │   bicycle.txt
    |    │   ...
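
To illustrate, here is a minimal sketch (with hypothetical class and file names) of how such a folder layout can be scanned into a label-to-image mapping, similar to what dataloaders for this structure typically expect:

```python
import pathlib
import tempfile

# Recreate a toy CUB-style layout (hypothetical names) in a temporary directory.
root = pathlib.Path(tempfile.mkdtemp()) / "images"
for cls in ["001.Black_footed_Albatross", "002.Laysan_Albatross"]:
    (root / cls).mkdir(parents=True)
    (root / cls / f"{cls}_0001.jpg").touch()

# Map each class folder to an integer label and its sorted list of image paths.
classes = sorted(p.name for p in root.iterdir())
image_dict = {label: sorted(str(f) for f in (root / cls).glob("*.jpg"))
              for label, cls in enumerate(classes)}
print(len(image_dict))  # 2
```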

Assuming your dataset folders are placed in a joint parent directory, pass that directory as input to the --source flag.

Training is done by running main.py and setting the respective flags, all of which are listed and explained in the parameters file. A vast set of exemplary runs is provided in the included training scripts.

[I.] A basic sample run using default parameters would look like this:

python main.py --loss margin --batch_mining distance --log_online \
              --project DML_Project --group Margin_with_Distance --seed 0 \
              --gpu 0 --bs 112 --data_sampler class_random --samples_per_class 2 \
              --arch resnet50_frozen_normalize --source $datapath --n_epochs 150 \
              --lr 0.00001 --embed_dim 128 --evaluate_on_gpu

The purpose of each flag explained:

  • --loss margin: Name of the training objective used. See the respective loss implementations for the available options.
  • --batch_mining distance: Name of the batch-miner to use (for tuple-based ranking methods).
  • --log_online: Log metrics online via either W&B (default) or CometML. Regardless, plots, weights and parameters are all stored offline as well.
  • --project, --group: Project name as well as name of the run. Different seeds will be logged into the same group online. The group as well as the used seed also define the local save name.
  • --seed, --gpu, --source: Basic parameters setting the training seed, the GPU to use and the path to the parent folder containing the respective datasets.
  • --arch: The utilized backbone, e.g. ResNet50. You can append _frozen and _normalize to the name to ensure that BatchNorm layers are frozen and embeddings are normalized, respectively.
  • --data_sampler, --samples_per_class: How to construct a batch. The default method, class_random, selects classes at random and places samples_per_class samples into the batch until the batch is filled.
  • --lr, --n_epochs, --bs, --embed_dim: Learning rate, number of training epochs, batch size and embedding dimensionality.
  • --evaluate_on_gpu: If set, all metrics are computed using the GPU. This requires Faiss-GPU and may need additional GPU memory.

Some Notes:

  • During training, the selected evaluation metrics will be logged for both the training and the validation/test set. If you do not care about detailed training-metric logging, simply disable it via the respective flag. A checkpoint is saved for improvements in the selected storage metrics on the training, validation or test set. Detailed information regarding the available metrics can be found at the bottom of this README.
  • If one wishes to use a training/validation split, simply set the respective validation-split flag.

[II.] Advanced Runs:

python main.py --loss margin --batch_mining distance --loss_margin_beta 0.6 --miner_distance_lower_cutoff 0.5 ... (basic parameters)
  • To use specific parameters that are loss-, batchminer- or e.g. datasampler-related, simply set the respective flag.
  • For structure and ease of use, parameters relating to a specific loss function or batchminer are prefixed accordingly, e.g. --loss_margin_beta or --miner_distance_lower_cutoff.
  • However, every parameter can be accessed from every class, as all parameters are stored in a shared namespace that is passed to all methods. This makes it easy to create novel fusion losses and the like.
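
The shared-namespace pattern can be sketched like this (class and attribute names are illustrative, not taken from the repo):

```python
from argparse import Namespace

# One namespace holds every parameter and is handed to all components.
opt = Namespace(loss_margin_beta=0.6, miner_distance_lower_cutoff=0.5)

class MarginLoss:
    def __init__(self, opt):
        self.beta = opt.loss_margin_beta  # loss-specific parameter
        # Cross-component access works too, enabling e.g. fusion losses:
        self.lower_cutoff = opt.miner_distance_lower_cutoff

loss = MarginLoss(opt)
print(loss.beta, loss.lower_cutoff)  # 0.6 0.5
```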

Evaluating Results with W&B

Here is some information on using W&B (highly encouraged!):

  • Create a free account at https://wandb.ai.
  • After the account is set up, make sure to include your API key in the respective parameter entry.
  • To make sure that W&B data can be stored, run wandb on in your project folder.
  • When data is logged online to W&B, the provided evaluation script can download all data, create named metric and correlation plots, and output a summary in the form of a LaTeX-ready table with means and standard deviations of all metrics. This ensures that there are no errors between computed and reported results.
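
The LaTeX-ready summary of means and standard deviations can be sketched as follows (metric names and values here are made up):

```python
from statistics import mean, stdev

# Hypothetical per-seed results, as they might be downloaded from W&B.
runs = {"Recall@1": [63.5, 63.9, 63.1], "NMI": [68.2, 67.8, 68.5]}

# One LaTeX table row per metric: "name & $mean \pm std$ \\".
rows = [f"{name} & ${mean(vals):.2f} \\pm {stdev(vals):.2f}$ \\\\"
        for name, vals in runs.items()]
print("\n".join(rows))
```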

Creating custom methods:

  1. Create custom objectives: Simply take a look at one of the existing loss implementations and ensure that the new method has the following properties:
     • Inherit from torch.nn.Module and define a custom forward() function.
     • When using trainable parameters, make sure to either provide a learning-rate attribute for the loss-specific parameters, or a list of optimization dictionaries that is passed directly to the optimizer. If both are set, the optimization-dictionary list has priority.
     • Depending on the loss, remember to set the variables
       ALLOWED_MINING_OPS  = None or a list of allowed mining operations
       REQUIRES_BATCHMINER = False or True
       REQUIRES_OPTIM      = False or True
       to denote whether the method needs a batchminer or optimization of internal parameters.
  2. Create custom batchminers: Simply take a look at one of the existing batchminer implementations. The miner needs to be a class with a defined __call__() function, taking in a batch and labels and returning e.g. a list of triplets.
  3. Create custom datasamplers: Simply take a look at one of the existing datasamplers. The sampler needs to inherit from torch.utils.data.sampler.Sampler and has to provide a __len__() and an __iter__() function. It has to yield the sets of indices that are used to create each batch.
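
A minimal sketch of such a batchminer interface (a random triplet miner, purely illustrative and independent of the repo's actual classes):

```python
import random

class RandomTripletMiner:
    # Callable taking a batch (embeddings) and labels, returning
    # (anchor, positive, negative) index triplets.
    def __call__(self, batch, labels):
        triplets = []
        for a, la in enumerate(labels):
            pos = [i for i, l in enumerate(labels) if l == la and i != a]
            neg = [i for i, l in enumerate(labels) if l != la]
            if pos and neg:
                triplets.append((a, random.choice(pos), random.choice(neg)))
        return triplets

miner = RandomTripletMiner()
trips = miner(None, [0, 0, 1, 1])
print(len(trips))  # 4
```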

Implemented Methods

For a detailed explanation of everything, please refer to the supplementary of our paper!

DML criteria

DML batchminer



Evaluation Metrics

Metrics based on Euclidean Distances:

  • Recall@K: Include Recall@K, e.g. via e_recall@1, in the list of evaluation metrics.
  • Normalized Mutual Information (NMI): Include with nmi.
  • F1: Include with f1.
  • mAP (class-averaged): Include standard mAP at Recall with mAP_c. You may also include mAP_1000 for mAP limited to Recall@1000, and mAP_lim limited to mAP at Recall@NumSamplesPerClass. Note that all of these are heavily correlated.
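
For intuition, Recall@1 on Euclidean distances can be computed brute-force as follows (a simplified sketch; the pipeline itself relies on Faiss for this):

```python
import numpy as np

def recall_at_1(embeds, labels):
    # Pairwise squared Euclidean distances between all embeddings.
    dists = ((embeds[:, None, :] - embeds[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(dists, np.inf)   # a sample may not retrieve itself
    nearest = dists.argmin(axis=1)    # nearest neighbour per sample
    return float((labels[nearest] == labels).mean())

emb = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
lab = np.array([0, 0, 1, 1])
print(recall_at_1(emb, lab))  # 1.0
```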

Metrics based on Cosine Similarities (not included by default):

  • Cosine Recall@K: Cosine-similarity variant of Recall@K. Include with c_recall@1.
  • Cosine Normalized Mutual Information (NMI): Include with c_nmi.
  • Cosine F1: Include with c_f1.
  • Cosine mAP (class-averaged): Include cosine-similarity mAP at Recall variants with c_mAP_c. You may also include c_mAP_1000 for mAP limited to Recall@1000, and c_mAP_lim limited to mAP at Recall@NumSamplesPerClass.

Embedding Space Metrics:

  • Spectral Variance: This metric refers to the spectral decay metric used in our ICML paper. Include it with rho_spectrum@1. To exclude the k largest spectral values for a more robust estimate, simply include rho_spectrum@k+1. Adding rho_spectrum@0 logs the whole singular value distribution, and rho_spectrum@-1 computes KL(q,p) instead of KL(p,q).
  • Mean Intraclass Distance: Include the mean intraclass distance via dists@intra.
  • Mean Interclass Distance: Include the mean interclass distance via dists@inter.
  • Ratio of Intra- to Interclass Distance: Include the ratio of distances via dists@intra_over_inter.
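
The spectral decay idea behind the Spectral Variance metric can be sketched as follows (a simplified illustration, not the repo's exact implementation): take the singular value spectrum of the embedding matrix, normalize it, and measure its KL divergence from a uniform spectrum. Lower values indicate variance spread over more embedding directions.

```python
import numpy as np

def rho_spectrum(embeds, k_excluded=0):
    # Singular values of the embedding matrix.
    s = np.linalg.svd(embeds, compute_uv=False)
    s = s[k_excluded:]                 # optionally drop the k largest values
    p = s / s.sum()                    # normalised spectrum
    q = np.full_like(p, 1.0 / len(p))  # uniform reference spectrum
    return float(np.sum(q * np.log(q / np.clip(p, 1e-12, None))))  # KL(q, p)

iso = np.eye(4)                        # variance spread over all directions
collapsed = np.outer([1.0, 2.0, 3.0, 4.0], [1.0, 0.0])  # rank-1 embeddings
print(rho_spectrum(iso) < rho_spectrum(collapsed))  # True
```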
