About the developer

learnables
1.2K Stars · 158 Forks · MIT License · 299 Commits · 11 Open Issues

Description

A PyTorch Library for Meta-learning Research


learn2learn is a software library for meta-learning research.

learn2learn builds on top of PyTorch to accelerate two aspects of the meta-learning research cycle:

  • fast prototyping, essential in letting researchers quickly try new ideas, and
  • correct reproducibility, ensuring that these ideas are evaluated fairly.

learn2learn provides low-level utilities and a unified interface to create new algorithms and domains, together with high-quality implementations of existing algorithms and standardized benchmarks. It retains compatibility with torchvision, torchaudio, torchtext, cherry, and any other PyTorch-based library you might be using.

To learn more, see our whitepaper: arXiv:2008.12284

Overview

  • learn2learn.data: TaskDataset and transforms to create few-shot tasks from any PyTorch dataset.
  • learn2learn.vision: Models, datasets, and benchmarks for computer vision and few-shot learning.
  • learn2learn.gym: Environments and utilities for meta-reinforcement learning.
  • learn2learn.algorithms: High-level wrappers for existing meta-learning algorithms.
  • learn2learn.optim: Utilities and algorithms for differentiable optimization and meta-descent.

Installation

pip install learn2learn

Snippets & Examples

The following snippets provide a sneak peek at the functionalities of learn2learn.

High-level Wrappers

Few-Shot Learning with MAML

For more algorithms (ProtoNets, ANIL, Meta-SGD, Reptile, Meta-Curvature, KFO) refer to the examples folder. Most of them can be implemented with the GBML wrapper. (documentation)

~~~python
maml = l2l.algorithms.MAML(model, lr=0.1)
opt = torch.optim.SGD(maml.parameters(), lr=0.001)
for iteration in range(10):
    opt.zero_grad()
    task_model = maml.clone()  # torch.clone() for nn.Modules
    adaptation_loss = compute_loss(task_model)
    task_model.adapt(adaptation_loss)  # computes gradient, updates task_model in-place
    evaluation_loss = compute_loss(task_model)
    evaluation_loss.backward()  # gradients w.r.t. maml.parameters()
    opt.step()
~~~
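The two-level loop above (adapt on a task, then backpropagate through the adaptation itself) can be sketched without any dependencies. Below is a toy MAML on scalar quadratic task losses (θ − c)², with both gradient levels derived by hand; the task family, learning rates, and function names are illustrative, not part of the learn2learn API.

```python
# Toy MAML on scalar quadratic tasks: L_t(theta) = (theta - c_t)^2.
# Gradients are written out by hand, so no autograd library is needed.

def maml_toy(theta=5.0, inner_lr=0.1, outer_lr=0.05, steps=200,
             task_optima=(-1.0, 1.0)):
    for _ in range(steps):
        meta_grad = 0.0
        for c in task_optima:
            # Inner (adaptation) step: one gradient step on the task loss.
            adapted = theta - inner_lr * 2.0 * (theta - c)
            # Outer gradient: d/dtheta of (adapted - c)^2, differentiating
            # *through* the inner update via the chain rule (1 - 2 * inner_lr).
            meta_grad += 2.0 * (adapted - c) * (1.0 - 2.0 * inner_lr)
        theta -= outer_lr * meta_grad / len(task_optima)
    return theta

meta_theta = maml_toy()
# By symmetry, the meta-learned initialization approaches 0,
# halfway between the two task optima.
```

The factor `(1.0 - 2.0 * inner_lr)` is exactly what `evaluation_loss.backward()` computes through `clone()`/`adapt()` in the snippet above, just collapsed to one dimension.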

Meta-Descent with Hypergradient

Learn any kind of optimization algorithm with the LearnableOptimizer. (example and documentation)

~~~python
linear = nn.Linear(784, 10)
transform = l2l.optim.ModuleTransform(l2l.nn.Scale)
metaopt = l2l.optim.LearnableOptimizer(linear, transform, lr=0.01)  # metaopt has .step()
opt = torch.optim.SGD(metaopt.parameters(), lr=0.001)  # metaopt also has .parameters()

metaopt.zero_grad()
opt.zero_grad()
error = loss(linear(X), y)
error.backward()
opt.step()  # update metaopt
metaopt.step()  # update linear
~~~
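The meta-descent idea, treating the optimizer's own parameters as something to learn, can be illustrated in plain Python with hypergradient descent, where a scalar learning rate is updated from the product of successive gradients. Everything below (the function name, constants, and quadratic objective) is an illustrative sketch, not learn2learn code.

```python
# Hypergradient descent on f(theta) = 0.5 * theta^2 (gradient = theta):
# the learning rate alpha is itself updated by descent on the loss, using
# d loss / d alpha = -(g_t . g_{t-1}) from the previous update step.

def hypergradient_descent(theta=10.0, alpha=0.01, beta=0.001, steps=100):
    prev_grad = 0.0
    for _ in range(steps):
        grad = theta                      # gradient of 0.5 * theta^2
        alpha += beta * grad * prev_grad  # hypergradient step on alpha
        theta -= alpha * grad             # ordinary step on theta
        prev_grad = grad
    return theta, alpha

final_theta, final_alpha = hypergradient_descent()
# alpha grows from its small initial value while gradients stay aligned,
# so convergence accelerates without hand-tuning the learning rate.
```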

Learning Domains

Custom Few-Shot Dataset

Many standardized datasets (Omniglot, mini-/tiered-ImageNet, FC100, CIFAR-FS) are readily available in learn2learn.vision.datasets. (documentation)

~~~python
dataset = l2l.data.MetaDataset(MyDataset())  # any PyTorch dataset
transforms = [  # easy to define your own transforms
    l2l.data.transforms.NWays(dataset, n=5),
    l2l.data.transforms.KShots(dataset, k=1),
    l2l.data.transforms.LoadData(dataset),
]
taskset = TaskDataset(dataset, transforms, num_tasks=20000)
for task in taskset:
    X, y = task
    # Meta-train on the task
~~~
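Conceptually, the NWays/KShots transforms turn a labelled dataset into tasks by sampling n classes and then k examples per class. The stand-in below shows that sampling logic with only the standard library; the helper name and toy dataset are hypothetical, not the learn2learn API.

```python
import random

# Stand-in for what NWays/KShots do conceptually: sample n classes,
# then k labelled examples per class, to form one few-shot task.
def sample_task(dataset_by_class, n=5, k=1, rng=random):
    classes = rng.sample(sorted(dataset_by_class), n)
    return [(x, c) for c in classes
            for x in rng.sample(dataset_by_class[c], k)]

# Toy dataset: 10 classes with 20 examples each.
data = {c: [f"img_{c}_{i}" for i in range(20)] for c in range(10)}
task = sample_task(data, n=5, k=1)  # 5 (example, label) pairs, 5 distinct labels
```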

Environments and Utilities for Meta-RL

Parallelize your own meta-environments with AsyncVectorEnv, or use the standardized ones. (documentation)

~~~python
def make_env():
    env = l2l.gym.HalfCheetahForwardBackwardEnv()
    env = cherry.envs.ActionSpaceScaler(env)
    return env

env = l2l.gym.AsyncVectorEnv([make_env for _ in range(16)])  # uses 16 threads
for task_config in env.sample_tasks(20):
    env.set_task(task_config)  # all threads receive the same task
    state = env.reset()  # use the standard Gym API
    action = my_policy(env)
    env.step(action)
~~~
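A vectorized environment simply forwards the single-environment API to every copy at once. The synchronous sketch below illustrates that contract in plain Python; the `CountingEnv` toy and `SyncVectorEnv` wrapper are hypothetical stand-ins (the real AsyncVectorEnv steps its copies in parallel threads).

```python
class CountingEnv:
    """Toy gym-style environment: reach a target count."""
    def set_task(self, task):
        self.target = task
    def reset(self):
        self.state = 0
        return self.state
    def step(self, action):
        self.state += action
        done = self.state >= self.target
        return self.state, float(done), done, {}

class SyncVectorEnv:
    """Exposes the same API as one env, applied to every wrapped copy."""
    def __init__(self, env_fns):
        self.envs = [fn() for fn in env_fns]
    def set_task(self, task):
        for env in self.envs:   # all copies receive the same task
            env.set_task(task)
    def reset(self):
        return [env.reset() for env in self.envs]
    def step(self, actions):
        return [env.step(a) for env, a in zip(self.envs, actions)]

vec = SyncVectorEnv([CountingEnv for _ in range(4)])
vec.set_task(3)
states = vec.reset()                 # one state per copy
results = vec.step([1, 1, 2, 3])    # one (state, reward, done, info) per copy
```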

Low-Level Utilities

Differentiable Optimization

Learn and differentiate through updates of PyTorch Modules. (documentation)

~~~python
model = MyModel()
transform = l2l.optim.KroneckerTransform(l2l.nn.KroneckerLinear)
learned_update = l2l.optim.ParameterUpdate(  # learnable update function
    model.parameters(), transform)
clone = l2l.clone_module(model)  # torch.clone() for nn.Modules
error = loss(clone(X), y)
updates = learned_update(  # similar API as torch.autograd.grad
    error,
    clone.parameters(),
    create_graph=True,
)
l2l.update_module(clone, updates=updates)
loss(clone(X), y).backward()  # gradients w.r.t. model.parameters() and learned_update.parameters()
~~~
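What `create_graph=True` buys you is the ability to take gradients of the post-update loss with respect to the update rule itself. The one-dimensional sketch below derives that by hand for a learnable scale on the gradient step; the setup and names are illustrative (learn2learn does this with autograd, for arbitrary modules).

```python
# Learn the scale s of an update theta' = theta - s * grad so that a single
# update minimizes the post-update loss (theta - 1)^2, with theta held fixed.
# The meta-gradient d loss / d s is derived by hand via the chain rule.

def learn_update_scale(s=0.1, meta_lr=0.01, steps=200, theta=4.0):
    for _ in range(steps):
        grad = 2.0 * (theta - 1.0)    # d/dtheta of (theta - 1)^2
        updated = theta - s * grad    # one learned update step
        # d/ds of (updated - 1)^2, since d(updated)/ds = -grad:
        s_grad = 2.0 * (updated - 1.0) * (-grad)
        s -= meta_lr * s_grad
    return s

scale = learn_update_scale()
# For a quadratic, one exact Newton-like step needs s = 0.5,
# and the learned scale converges there.
```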

Changelog

A human-readable changelog is available in the CHANGELOG.md file.

Citation

To cite the learn2learn repository in your academic publications, please use the following reference.

Arnold, Sebastien M. R., Praateek Mahajan, Debajyoti Datta, Ian Bunner, and Konstantinos Saitas Zarkias. 2020. “learn2learn: A Library for Meta-Learning Research.” arXiv [cs.LG]. http://arxiv.org/abs/2008.12284.

You can also use the following BibTeX entry.

@article{Arnold2020-ss,
  title         = "learn2learn: A Library for {Meta-Learning} Research",
  author        = "Arnold, S{\'e}bastien M R and Mahajan, Praateek and Datta,
                   Debajyoti and Bunner, Ian and Zarkias, Konstantinos Saitas",
  month         =  aug,
  year          =  2020,
  url           = "http://arxiv.org/abs/2008.12284",
  archivePrefix = "arXiv",
  primaryClass  = "cs.LG",
  eprint        = "2008.12284"
}

Acknowledgements & Friends

  1. The RL environments are adapted from Tristan Deleu's implementations and from the ProMP repository; both are shared with permission, under the MIT License.
  2. TorchMeta is a similar library, with a focus on datasets for supervised meta-learning.
  3. higher is a PyTorch library that enables differentiating through optimization inner loops. While they monkey-patch nn.Module to be stateless, learn2learn retains the stateful PyTorch look-and-feel. For more information, refer to their arXiv paper.