PyTorch implementation of the YOLO (You Only Look Once) v2

YOLOv2 is one of the most popular one-stage object detectors. This project uses PyTorch as its development framework to increase productivity, and ONNX to convert models into Caffe2 for easier engineering deployment. If you benefit from this project, a donation would be appreciated (via PayPal, WeChat Pay, or Alipay).
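As a rough illustration of that deployment path (not this project's actual code), the sketch below exports a PyTorch model to ONNX with torch.onnx.export; the torchvision backbone and the input size are stand-ins.

import torch
import torchvision

# Stand-in network; the project's own YOLOv2 models would be exported the same way.
model = torchvision.models.resnet18()
model.eval()

# YOLOv2's canonical 416x416 input; any fixed size works for tracing.
dummy_input = torch.randn(1, 3, 416, 416)
torch.onnx.export(model, dummy_input, 'model.onnx')
# model.onnx can then be loaded by a Caffe2 ONNX backend
# (e.g. caffe2.python.onnx.backend) for deployment.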


  • Flexible configuration design. Program settings are configurable and can be modified from the command line, either by overlapping configuration files (the -c/--config option) or by editing individual entries (the -m/--modify option); a sketch follows this list.

  • Monitoring via TensorBoard, including the loss values and debugging images (such as the IoU heatmap, and the ground-truth and predicted bounding boxes); a sketch follows this list.

  • Parallel model training design. Different models are saved into different directories so that they can be trained simultaneously.

  • A NoSQL database stores the evaluation results along with multiple dimensions of information. This design is useful when analyzing a large number of experiment results; a sketch follows this list.

  • Time-based output design. Running information (such as the model, the summaries produced by TensorBoard, and the evaluation results) is saved periodically at a predefined interval.

  • Checkpoint management. The latest few checkpoint files (.pth) are kept in the model directory, and older ones are deleted; a sketch follows this list.

  • NaN debugging. When a NaN loss is detected, the running environment (the data batch) and the model are exported so that the cause can be analyzed; a sketch follows this list.

  • Unified data cache design. Various datasets are converted into a unified data cache via corresponding cache plugins. Plugins for PASCAL VOC and MS COCO are already implemented.

  • Arbitrarily replaceable model plugin design. The main deep neural network (DNN) can be easily replaced via the configuration settings. Multiple models are already provided, such as Darknet, ResNet, Inception v3 and v4, MobileNet, and DenseNet.

  • Extensible data preprocessing plugin design. The original images (of varying sizes) and their labels are processed by a sequence of operations to form a training batch (images of the same size, with padded bounding-box lists); a sketch follows this list. Multiple preprocessing plugins are already implemented, such as augmentation operators that process images and labels simultaneously (random rotate and random flip), operators that resize both images and labels to a fixed size within a batch (random crop), and operators that augment images only (random blur, random saturation, and random brightness).
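A minimal sketch of the configuration overlay described in the first item above, assuming INI-style files; the -c/--config and -m/--modify option names mirror the README, but the parsing code is illustrative and not taken from this project.

import argparse
import configparser

parser = argparse.ArgumentParser()
parser.add_argument('-c', '--config', nargs='+', default=['config.ini'])
parser.add_argument('-m', '--modify', nargs='+', default=[],
                    help='override entries, e.g. train.batch_size=16')
args = parser.parse_args()

config = configparser.ConfigParser()
config.read(args.config)  # later files overlap (override) earlier ones
for item in args.modify:  # command-line edits take the highest priority
    key, value = item.split('=', 1)
    section, option = key.split('.', 1)
    if not config.has_section(section):
        config.add_section(section)
    config.set(section, option, value)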
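A minimal sketch of the TensorBoard monitoring, using torch.utils.tensorboard; the tag names and the random heatmap are placeholders.

import torch
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter('logs/yolo2')
for step in range(100):
    loss = torch.rand(())                  # stand-in for the training loss
    writer.add_scalar('loss/total', loss.item(), step)
    heatmap = torch.rand(1, 13, 13)        # stand-in IoU heatmap, CHW layout
    writer.add_image('debug/iou_heatmap', heatmap, step)
writer.close()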
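A minimal sketch of storing evaluation results in a NoSQL database; MongoDB via pymongo is assumed here, and every field in the record is illustrative.

import datetime
import pymongo

client = pymongo.MongoClient('mongodb://localhost:27017')
results = client['yolo2']['evaluation']
results.insert_one({
    'model': 'darknet',           # which DNN plugin was evaluated
    'dataset': 'voc2007-test',    # which cached data profile
    'mAP': 0.7,                   # hypothetical score, not a reported result
    'time': datetime.datetime.utcnow(),
})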
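A minimal sketch of the checkpoint pruning; the number of files kept and the directory layout are assumptions.

import glob
import os

def prune_checkpoints(model_dir, keep=5):
    paths = glob.glob(os.path.join(model_dir, '*.pth'))
    paths.sort(key=os.path.getmtime, reverse=True)  # newest first
    for path in paths[keep:]:                       # drop everything older
        os.remove(path)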
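A minimal sketch of the NaN debugging; the dump format and the file name are assumptions.

import torch

def check_nan(loss, model, batch, path='nan_debug.pth'):
    if torch.isnan(loss).any():
        torch.save({
            'batch': batch,                    # the data that triggered the NaN
            'state_dict': model.state_dict(),  # the weights at failure time
        }, path)
        raise RuntimeError('NaN loss detected; environment saved to ' + path)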
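A minimal sketch of forming a training batch from variable-size samples, assuming (3, H, W) image tensors and (n, 4) box arrays; the target size and the padding limit are arbitrary.

import torch
import torch.nn.functional as F

def collate(samples, size=416, max_boxes=30):
    images, boxes = [], []
    for image, b in samples:  # image: (3, H, W); b: (n, 4), n <= max_boxes
        image = F.interpolate(image.unsqueeze(0), size=(size, size),
                              mode='bilinear', align_corners=False)[0]
        images.append(image)
        padded = torch.zeros(max_boxes, 4)
        padded[:b.shape[0]] = b  # pad each box list with zero rows
        boxes.append(padded)
    return torch.stack(images), torch.stack(boxes)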


  • [x] Reproduce the original paper's training results.
  • [x] Multi-scale training (sketched below).
  • [x] Dimension clusters (sketched below).
  • [x] Darknet model file (.weights) parser.
  • [x] Detection from an image or a camera.
  • [x] Video file processing.
  • [x] Multi-GPU support.
  • [ ] Distributed training.
  • [ ] Focal loss.
  • [x] Channel-wise model parameter analyzer.
  • [x] Automatically changing the number of channels.
  • [x] Receptive field analyzer.
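Two of the checked items above can be sketched briefly. Multi-scale training, as described in the YOLOv2 paper, periodically switches the input resolution among multiples of 32; the batch below is random stand-in data.

import random
import torch
import torch.nn.functional as F

scales = [320 + 32 * i for i in range(10)]  # 320, 352, ..., 608
size = random.choice(scales)
for step in range(1000):
    if step % 10 == 0:  # the paper picks a new size every 10 batches
        size = random.choice(scales)
    images = torch.randn(8, 3, 480, 480)  # stand-in batch
    images = F.interpolate(images, size=(size, size), mode='bilinear',
                           align_corners=False)
    # ... forward/backward pass at this resolution ...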
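Dimension clusters, also from the paper, run k-means over the ground-truth box sizes with a 1 - IoU distance to derive anchor priors. This NumPy sketch assumes an (n, 2) array of box widths and heights and that no cluster goes empty.

import numpy as np

def iou(wh, centroids):  # wh: (n, 2), centroids: (k, 2) -> (n, k)
    inter = np.minimum(wh[:, None, 0], centroids[None, :, 0]) * \
            np.minimum(wh[:, None, 1], centroids[None, :, 1])
    union = wh[:, None].prod(-1) + centroids[None, :].prod(-1) - inter
    return inter / union

def dimension_clusters(wh, k=5, steps=100):
    centroids = wh[np.random.choice(len(wh), k, replace=False)]
    for _ in range(steps):
        assign = np.argmax(iou(wh, centroids), axis=1)  # nearest = highest IoU
        centroids = np.array([wh[assign == i].mean(axis=0) for i in range(k)])
    return centroids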

Quick Start

This project uses Python 3. To install the dependent libraries, type the following command in a terminal.

sudo pip3 install -r requirements.txt
The quick-start script contains examples of detection and evaluation; run it to try them. Multiple datasets and models (in the original Darknet format, which will be converted into PyTorch's format) will be downloaded (aria2 is required). The datasets are cached into different data profiles, and the models are evaluated over the cached data. The models are then used to detect objects in an example image, and the detection results are shown.


This project is released as open source software under the GNU Lesser General Public License version 3 (LGPL v3).
