A repository of common methods, datasets, and tasks for video research



  • Please note that this repository is in the process of being released to the public. Please bear with us as we standardize the API and streamline the code.

  • Some of the baselines were run with an older version of the codebase (but the git commit hash is available for each experiment) and might need to be updated.

  • We encourage you to submit a pull request to help us document and incorporate as many baselines and datasets as possible into this codebase.

  • We hope this project will be of value to the community and that everyone will consider adding their methods to this codebase.

List of implemented methods:

  • I3D
  • 3D ResNet
  • Asynchronous Temporal Fields
  • Actor Observer Network
  • Temporal Segment Networks
  • Temporal Relational Networks
  • Non-local neural networks
  • Two-Stream Networks
  • I3D Mask-RCNN
  • 3D ResNet Video Autoencoder

List of supported datasets:

  • Charades
  • CharadesEgo
  • Kinetics
  • AVA
  • ActivityNet
  • Something Something
  • Jester

List of supported tasks:

  • Action classification
  • Action localization
  • Spatial action localization
  • Inpainting
  • Video alignment
  • Triplet classification

Contributor: Gunnar Atli Sigurdsson

  • If this code helps your research, please consider citing:

        author = {Gunnar A. Sigurdsson and Abhinav Gupta},
        title = {PyVideoResearch},
        code = {},

    and remember to cite the papers for the datasets/methods you use.

Installation Instructions

Requirements:

  • Python 2.7 or Python 3.6
  • PyTorch 0.4 or PyTorch 1.0

Python packages:

  • numpy
  • ffmpeg-python
  • PIL
  • cv2
  • torchvision
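Before running experiments, it can help to confirm these packages are importable. A minimal sketch (note the import names: ffmpeg-python installs as "ffmpeg", Pillow provides "PIL", and opencv-python provides "cv2"):

```python
import importlib.util

# Import names corresponding to the required packages listed above.
required = ["numpy", "ffmpeg", "PIL", "cv2", "torchvision"]

# find_spec returns None when a package is not installed.
missing = [name for name in required
           if importlib.util.find_spec(name) is None]
if missing:
    print("Missing packages:", ", ".join(missing))
else:
    print("All required packages found.")
```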

See external libraries under external/ for requirements if using their corresponding baselines.

Run the following to get both this repository and the remote repositories under external/:

git clone git@github.com:gsig/PyVideoResearch.git
cd PyVideoResearch
git submodule update --init --recursive

Steps to train your own network:

  1. Download the corresponding dataset.
  2. Duplicate and edit one of the experiment files under exp/ with appropriate parameters. For additional parameters, see
  3. Run an experiment by calling python exp/your_experiment.py, where your_experiment.py is your experiment file. See baseline_exp/ for a variety of baselines.
  4. The checkpoints, logfiles, and outputs are stored in your specified cache directory.
  5. Build on the code, cite our papers, and say hi to us at CVPR.
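To illustrate step 2, an experiment file is a small Python script that fixes the parameters for one run. The sketch below is hypothetical: the parameter names, the entry point main.py, and the flag format are assumptions, not the repo's actual API — consult the real files under exp/ and baseline_exp/ for the true format.

```python
# Hypothetical experiment file sketch (my_experiment.py).
# All parameter names here are illustrative only.
import sys

params = {
    "name": "resnet50_charades",  # experiment name, used for the cache subdirectory
    "dataset": "charades",        # which supported dataset to train on
    "arch": "resnet50",           # backbone architecture
    "lr": "0.01",                 # learning rate
    "batch-size": "32",
    "cache-dir": "./cache/",      # checkpoints/logfiles/outputs land here
}

# Turn the dict into command-line flags for an assumed training entry point;
# this sketch prints the command rather than launching it.
args = [arg for key, val in params.items() for arg in ("--" + key, val)]
print([sys.executable, "main.py"] + args)
```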

Good luck!

Pretrained networks:

We are in the process of preparing and releasing the pre-trained models. If anything is missing, please let us know. The names correspond to experiments under baseline_exp/. While we standardize the names, please be aware that some of the models may have names listed after "original name" in the experiment file. We also provide the generated log.txt file for each experiment as name.txt.

The models are stored here:

  • ResNet50 pre-trained on Charades
    • resnet50_rgb.pth.tar
    • resnet50rgbpython3.pth.tar
  • ResNet101 pre-trained on Charades
    • resnet101_rgb.pth.tar
    • resnet101rgbpython3.pth.tar
  • I3D pre-trained on ImageNet+Kinetics (courtesy of
    • ajrgbimagenet.pth
  • I3D pre-trained on Charades (courtesy of

    • ajrgbcharades.pth




    • async__par1.pth.tar
    • async__par1.txt





    • i3d31b.pth.tar
    • i3d31b.txt

    • i3d8l.pth.tar
    • i3d8l.txt

    • i3d12b2.pth.tar
    • i3d12b2.txt

    • i3d8k.pth.tar
    • i3d8k.txt


    • trn4b.pth.tar
    • trn4b.txt


    • trn2f3b.pth.tar
    • trn2f3b.txt


    • anet2.pth.tar
    • anet2.txt
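The .pth.tar files above are standard PyTorch checkpoints, so they load with torch.load. Checkpoints saved from a model wrapped in nn.DataParallel typically store parameter keys with a "module." prefix, which must be stripped before loading into an unwrapped model. A minimal sketch of that renaming — whether a given checkpoint here actually needs it is an assumption to verify per file (the torch.load usage is shown in a comment so the sketch itself needs no PyTorch install):

```python
def strip_module_prefix(state_dict):
    """Remove the "module." prefix that nn.DataParallel adds to parameter keys."""
    prefix = "module."
    return {key[len(prefix):] if key.startswith(prefix) else key: value
            for key, value in state_dict.items()}

# Typical usage with a downloaded checkpoint (requires PyTorch):
#   checkpoint = torch.load("resnet50_rgb.pth.tar", map_location="cpu")
#   state_dict = checkpoint.get("state_dict", checkpoint)
#   model.load_state_dict(strip_module_prefix(state_dict))

# Pure-dict demonstration of the renaming:
demo = {"module.conv1.weight": 1, "fc.bias": 2}
print(strip_module_prefix(demo))
```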

Infrequently Asked Questions
