PyTorch-YOLOv3

A minimal PyTorch implementation of YOLOv3, with support for training, inference and evaluation.

Installation

Installing from source

For normal training and evaluation we recommend installing the package from source using a Poetry virtual environment.

git clone https://github.com/eriklindernoren/PyTorch-YOLOv3
cd PyTorch-YOLOv3/
pip3 install poetry --user
poetry install

You need to enter the virtual environment by running `poetry shell` in this directory before running any of the following commands without the `poetry run` prefix. Also have a look at the other installation method if you want to use the commands everywhere without opening a Poetry shell.

Download pretrained weights

./weights/download_weights.sh

Download COCO

./data/get_coco_dataset.sh

Install via pip

This installation method is recommended if you want to use this package as a dependency in another Python project. It only includes the code, is less isolated, and may conflict with other packages. Weights and the COCO dataset need to be downloaded as stated above. See API for further information regarding the package's API. It also enables the CLI tools `yolo-detect`, `yolo-train`, and `yolo-test` everywhere without any additional commands.

pip3 install pytorchyolo --user

Test

Evaluates the model on the COCO test dataset. To download this dataset as well as the weights, see above.

poetry run yolo-test --weights weights/yolov3.weights

| Model                   | mAP (min. 50 IoU) |
| ----------------------- |:-----------------:|
| YOLOv3 608 (paper)      | 57.9              |
| YOLOv3 608 (this impl.) | 57.3              |
| YOLOv3 416 (paper)      | 55.3              |
| YOLOv3 416 (this impl.) | 55.5              |

Inference

Uses pretrained weights to make predictions on images. The table below shows inference times for input images scaled to 256x256. The ResNet backbone measurements are taken from the YOLOv3 paper. The Darknet-53 measurement marked "this impl." shows the inference time of this implementation on my 1080ti card.

| Backbone                | GPU     | FPS |
| ----------------------- |:-------:|:---:|
| ResNet-101              | Titan X | 53  |
| ResNet-152              | Titan X | 37  |
| Darknet-53 (paper)      | Titan X | 76  |
| Darknet-53 (this impl.) | 1080ti  | 74  |

poetry run yolo-detect --images data/samples/
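
If you prefer the Python API over the CLI, a rough timing of the same kind of prediction could look like the sketch below. This is only an illustration, not the benchmark setup used for the table above; the config, weights, and sample image paths are assumptions about where those files live in your checkout.

```python
import time

import cv2
from pytorchyolo import detect, models

# Assumed paths -- adjust to wherever your config, weights, and test image live
model = models.load_model("config/yolov3.cfg", "weights/yolov3.weights")
img = cv2.imread("data/samples/dog.jpg")

# Warm up once so model loading and CUDA initialization are not counted
detect.detect_image(model, img)

# Time repeated predictions and report an approximate FPS
n_runs = 50
start = time.perf_counter()
for _ in range(n_runs):
    detect.detect_image(model, img)
elapsed = time.perf_counter() - start
print(f"~{n_runs / elapsed:.1f} FPS on this image")
```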

Train

For argument descriptions have a look at

poetry run yolo-train --help

Example (COCO)

To train on COCO using a Darknet-53 backbone pretrained on ImageNet, run:

poetry run yolo-train --data config/coco.data  --pretrained_weights weights/darknet53.conv.74

Tensorboard

Track training progress in Tensorboard:

* Initialize training
* Run the command below
* Go to http://localhost:6006/

poetry run tensorboard --logdir='logs' --port=6006

Storing the logs on a slow drive may significantly slow down training.

You can adjust the log directory using `--logdir` when running `tensorboard` and `yolo-train`.

Train on Custom Dataset

Custom model

Run the command below to create a custom model definition, replacing `<num-classes>` with the number of classes in your dataset.

./config/create_custom_model.sh <num-classes>  # Will create custom model 'yolov3-custom.cfg'

Classes

Add class names to `data/custom/classes.names`. This file should have one row per class name.
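
As a small illustration, a hypothetical two-class dataset (the class names below are made up) could create this file as follows; the zero-indexed row number of each name becomes the `label_idx` used in the annotation files described below.

```python
# Hypothetical class list; one name per row in data/custom/classes.names
class_names = ["cat", "dog"]

with open("data/custom/classes.names", "w") as f:
    f.write("\n".join(class_names) + "\n")
```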

Image Folder

Move the images of your dataset to `data/custom/images/`.

Annotation Folder

Move your annotations to `data/custom/labels/`. The dataloader expects that the annotation file corresponding to the image `data/custom/images/train.jpg` has the path `data/custom/labels/train.txt`. Each row in the annotation file should define one bounding box, using the syntax `label_idx x_center y_center width height`. The coordinates should be scaled to `[0, 1]`, and the `label_idx` should be zero-indexed and correspond to the row number of the class name in `data/custom/classes.names`.
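
For illustration, a hypothetical helper that converts a pixel-space box into one such annotation row might look like this (the function name and the example values are made up):

```python
def to_yolo_row(label_idx, x_min, y_min, x_max, y_max, img_width, img_height):
    """Convert a pixel-space box to 'label_idx x_center y_center width height', scaled to [0, 1]."""
    x_center = (x_min + x_max) / 2 / img_width
    y_center = (y_min + y_max) / 2 / img_height
    width = (x_max - x_min) / img_width
    height = (y_max - y_min) / img_height
    return f"{label_idx} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}"

# A 100x200 pixel box for class 0 in a 640x480 image:
print(to_yolo_row(0, 50, 40, 150, 240, 640, 480))
# -> 0 0.156250 0.291667 0.156250 0.416667
```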

Define Train and Validation Sets

In `data/custom/train.txt` and `data/custom/valid.txt`, add the paths to the images that will be used as train and validation data respectively.
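
A minimal sketch for generating these two files from the image folder, assuming JPEG images and an arbitrary 80/20 random split:

```python
import glob
import random

# Collect all custom images and shuffle them reproducibly
image_paths = sorted(glob.glob("data/custom/images/*.jpg"))
random.seed(42)
random.shuffle(image_paths)

# 80% train / 20% validation (the ratio is an arbitrary choice for this example)
split = int(0.8 * len(image_paths))
with open("data/custom/train.txt", "w") as f:
    f.write("\n".join(image_paths[:split]) + "\n")
with open("data/custom/valid.txt", "w") as f:
    f.write("\n".join(image_paths[split:]) + "\n")
```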

Train

To train on the custom dataset run:

poetry run yolo-train --model config/yolov3-custom.cfg --data config/custom.data

Add `--pretrained_weights weights/darknet53.conv.74` to train using a backbone pretrained on ImageNet.

API

You are able to import the modules of this repo in your own project if you install the pip package `pytorchyolo`.

An example prediction call from a simple OpenCV Python script would look like this:

```python
import cv2
from pytorchyolo import detect, models

# Load the YOLO model (replace the paths with your own config and weights files)
model = models.load_model("<path-to>/yolov3.cfg", "<path-to>/yolov3.weights")

# Load the image as a numpy array
img = cv2.imread("<path-to-your-image>")

# Run the YOLO model on the image
boxes = detect.detect_image(model, img)

print(boxes)
```

Output will be a numpy array in the following format:

[[x1, y1, x2, y2, confidence, class]]

For more advanced usage look at the method's doc strings.
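
For instance, the returned array can be drawn onto the image with plain OpenCV. This is only a sketch; the config, weights, and image paths below are assumptions about your checkout, and the class index is printed as-is rather than looked up in a names file.

```python
import cv2
from pytorchyolo import detect, models

# Assumed paths -- adjust to your own config, weights, and image
model = models.load_model("config/yolov3.cfg", "weights/yolov3.weights")
img = cv2.imread("data/samples/dog.jpg")

# Draw each predicted box and its confidence onto the image
for x1, y1, x2, y2, confidence, cls in detect.detect_image(model, img):
    cv2.rectangle(img, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 2)
    cv2.putText(img, f"{int(cls)}: {confidence:.2f}", (int(x1), int(y1) - 5),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)

cv2.imwrite("detections.jpg", img)
```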

Credit

YOLOv3: An Incremental Improvement

Joseph Redmon, Ali Farhadi

Abstract
We present some updates to YOLO! We made a bunch of little design changes to make it better. We also trained this new network that’s pretty swell. It’s a little bigger than last time but more accurate. It’s still fast though, don’t worry. At 320 × 320 YOLOv3 runs in 22 ms at 28.2 mAP, as accurate as SSD but three times faster. When we look at the old .5 IOU mAP detection metric YOLOv3 is quite good. It achieves 57.9 AP50 in 51 ms on a Titan X, compared to 57.5 AP50 in 198 ms by RetinaNet, similar performance but 3.8× faster. As always, all the code is online at https://pjreddie.com/yolo/.

[Paper] [Project Webpage] [Authors' Implementation]

@article{yolov3,
  title={YOLOv3: An Incremental Improvement},
  author={Redmon, Joseph and Farhadi, Ali},
  journal = {arXiv},
  year={2018}
}
