# Understanding Contrastive Representation Learning through Alignment and Uniformity on the Hypersphere
This repository provides a PyTorch implementation of the alignment and uniformity metrics for unsupervised representation learning. These metrics are proposed in Understanding Contrastive Representation Learning through Alignment and Uniformity on the Hypersphere.
These metrics/losses are useful for:

1. (as metrics) quantifying the feature distribution properties of an encoder;
2. (as losses) directly training the encoder.
## Requirements

+ PyTorch >= 1.5.0
Thanks to their simple forms, these losses are implemented in just a few lines of code in `align_uniform/__init__.py`:

```py
import torch


def align_loss(x, y, alpha=2):
    # expected distance between positive-pair features
    return (x - y).norm(p=2, dim=1).pow(alpha).mean()


def uniform_loss(x, t=2):
    # log of the average pairwise Gaussian potential
    return torch.pdist(x, p=2).pow(2).mul(-t).exp().mean().log()
```
After `import align_uniform`, you can access them with

```py
align_uniform.align_loss(x, y)
align_uniform.uniform_loss(x)
```
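To illustrate both uses listed above, here is a minimal, self-contained sketch (not taken from this repository's example scripts): the encoder, batch shapes, and the equal weighting between the alignment and uniformity terms are assumptions made for illustration only. Features are L2-normalized with `F.normalize` so that they lie on the unit hypersphere, which these losses assume.

```py
import torch
import torch.nn.functional as F

import align_uniform

# Hypothetical encoder and optimizer; any nn.Module producing feature vectors works.
encoder = torch.nn.Linear(784, 128)
optimizer = torch.optim.SGD(encoder.parameters(), lr=0.05)

# x_batch and y_batch stand in for two augmented views of the same images (positive pairs).
x_batch = torch.randn(256, 784)
y_batch = torch.randn(256, 784)

# Project features onto the unit hypersphere before applying the losses.
x = F.normalize(encoder(x_batch), dim=1)
y = F.normalize(encoder(y_batch), dim=1)

# Use 1: as metrics, quantifying the feature distribution (no gradients needed).
with torch.no_grad():
    print('alignment:', align_uniform.align_loss(x, y).item())
    print('uniformity:', align_uniform.uniform_loss(x).item())

# Use 2: as losses, training the encoder directly.
# The equal weighting of the two terms below is an illustrative choice.
loss = align_uniform.align_loss(x, y) \
    + (align_uniform.uniform_loss(x) + align_uniform.uniform_loss(y)) / 2
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In practice, the relative weight of the uniformity term, as well as `alpha` and `t`, are hyperparameters to tune for your setup.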
We provide the following examples to perform unsupervised representation learning using these two losses:

+ STL-10
+ ImageNet and ImageNet-100 with a MoCo Variant
Tongzhou Wang, Phillip Isola. "Understanding Contrastive Representation Learning through Alignment and Uniformity on the Hypersphere". International Conference on Machine Learning. 2020.
```
@article{wang2020hypersphere,
    title={Understanding Contrastive Representation Learning through Alignment and Uniformity on the Hypersphere},
    author={Wang, Tongzhou and Isola, Phillip},
    journal={arXiv preprint arXiv:2005.10242},
    year={2020}
}
```
For questions about the code provided in this repository, please open a GitHub issue.
For questions about the paper, please contact Tongzhou Wang (tongzhou _AT_ mit _DOT_ edu).