
keras-yolo3


Introduction

A Keras implementation of YOLOv3 (Tensorflow backend) inspired by allanzelener/YAD2K.


Quick Start

  1. Download YOLOv3 weights from YOLO website.
  2. Convert the Darknet YOLO model to a Keras model.
  3. Run YOLO detection.
```
wget https://pjreddie.com/media/files/yolov3.weights
python convert.py yolov3.cfg yolov3.weights model_data/yolo.h5
python yolo_video.py [OPTIONS...] --image                      # image detection mode
python yolo_video.py [video_path] [output_path (optional)]     # video detection mode
```

For Tiny YOLOv3, follow the same steps, but specify the model path and anchor path with `--model model_file` and `--anchors anchor_file`.
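
If you prefer to call the detector from Python instead of going through yolo_video.py, here is a minimal sketch. It assumes the repository's yolo.py exposes the YOLO class with detect_image() and close_session() methods (which is what yolo_video.py uses internally); the image path is just an example.

```python
# Minimal sketch, assuming yolo.py provides the YOLO class used by yolo_video.py.
# The model/anchor/class paths are the defaults produced by the Quick Start steps.
from PIL import Image
from yolo import YOLO

detector = YOLO(model_path='model_data/yolo.h5',
                anchors_path='model_data/yolo_anchors.txt',
                classes_path='model_data/coco_classes.txt')

image = Image.open('path/to/img1.jpg')   # any RGB image (example path)
result = detector.detect_image(image)    # returns a PIL image with drawn boxes
result.show()

detector.close_session()                 # release the underlying TF session
```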

Usage

Use --help to see usage of yolo_video.py:
```
usage: yolo_video.py [-h] [--model MODEL] [--anchors ANCHORS]
                     [--classes CLASSES] [--gpu_num GPU_NUM] [--image]
                     [--input] [--output]

positional arguments:
  --input            Video input path
  --output           Video output path

optional arguments:
  -h, --help         show this help message and exit
  --model MODEL      path to model weight file, default model_data/yolo.h5
  --anchors ANCHORS  path to anchor definitions, default model_data/yolo_anchors.txt
  --classes CLASSES  path to class definitions, default model_data/coco_classes.txt
  --gpu_num GPU_NUM  Number of GPU to use, default 1
  --image            Image detection mode, will ignore all positional arguments
```

  1. Multi-GPU usage: use `--gpu_num N` to use N GPUs. It is passed to the Keras `multi_gpu_model()`.
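
As a rough illustration of what happens behind --gpu_num, here is a self-contained sketch of Keras's multi_gpu_model() on a stand-in model (not the repository's YOLOv3 graph): it replicates the model on each GPU and splits every batch across the replicas.

```python
# Minimal sketch of multi_gpu_model(); MobileNet is only a stand-in model here.
# Requires at least 2 visible GPUs, mirroring `--gpu_num 2`.
from keras.applications.mobilenet import MobileNet
from keras.utils import multi_gpu_model

base_model = MobileNet(weights=None)                  # any Keras model; YOLOv3 in this repo
parallel_model = multi_gpu_model(base_model, gpus=2)  # replicate on 2 GPUs
parallel_model.compile(optimizer='adam', loss='categorical_crossentropy')
# parallel_model.fit(...) / .predict(...) now run on both GPUs,
# while the weights are still shared with base_model.
```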

Training

  1. Generate your own annotation file and class names file.
    One row for one image;
    Row format: `image_file_path box1 box2 ... boxN`;
    Box format: `x_min,y_min,x_max,y_max,class_id` (no space).
    For the VOC dataset, try `python voc_annotation.py`.
    Here is an example (a small parsing sketch follows this list):
    path/to/img1.jpg 50,100,150,200,0 30,50,200,120,3
    path/to/img2.jpg 120,300,250,600,2
    ...
  2. Make sure you have run

    python convert.py -w yolov3.cfg yolov3.weights model_data/yolo_weights.h5

    The file model_data/yolo_weights.h5 is used to load pretrained weights.
  3. Modify train.py and start training.

    python train.py

    Use your trained weights or checkpoint weights with the command line option `--model model_file` when using yolo_video.py. Remember to modify the class path and anchor path with `--classes class_file` and `--anchors anchor_file`.
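
The following hypothetical helper (not part of the repository) parses one annotation row in the format described in step 1, which can be handy for sanity-checking a generated annotation file.

```python
# Hypothetical helper for the annotation format above:
#   image_file_path x_min,y_min,x_max,y_max,class_id ...
def parse_annotation_line(line):
    parts = line.strip().split()
    image_path = parts[0]
    boxes = [tuple(map(int, box.split(','))) for box in parts[1:]]
    return image_path, boxes

path, boxes = parse_annotation_line('path/to/img1.jpg 50,100,150,200,0 30,50,200,120,3')
print(path)   # path/to/img1.jpg
print(boxes)  # [(50, 100, 150, 200, 0), (30, 50, 200, 120, 3)]
```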

If you want to use original pretrained weights for YOLOv3:

  1. `wget https://pjreddie.com/media/files/darknet53.conv.74`
  2. Rename it as darknet53.weights
  3. `python convert.py -w darknet53.cfg darknet53.weights model_data/darknet53_weights.h5`
  4. Use model_data/darknet53_weights.h5 in train.py

Some issues to know

  1. The test environment is

    • Python 3.5.2
    • Keras 2.1.5
    • tensorflow 1.6.0
  2. Default anchors are used. If you use your own anchors, probably some changes are needed.

  3. The inference result is not totally the same as Darknet but the difference is small.

  4. The speed is slower than Darknet. Replacing PIL with opencv may help a little.

  5. Always load pretrained weights and freeze layers in the first stage of training (see the freezing sketch after this list). Or try Darknet training. It's OK if there is a mismatch warning.

  6. The training strategy is for reference only. Adjust it according to your dataset and your goal, and add further strategies if needed.

  7. To speed up the training process with frozen layers, train_bottleneck.py can be used. It computes the bottleneck features of the frozen model first and then trains only the last layers. This makes training on a CPU possible in a reasonable time. See this for more information on bottleneck features.
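
The two-stage schedule from point 5 (freeze first, then fine-tune) boils down to toggling `layer.trainable` and recompiling. Here is a minimal Keras sketch on a stand-in model, not the repository's training code:

```python
# Minimal sketch of two-stage training; MobileNet stands in for the YOLOv3 body.
from keras.applications.mobilenet import MobileNet
from keras.optimizers import Adam

model = MobileNet(weights=None, classes=10)

# Stage 1: freeze everything except the last few layers and train only the head.
for layer in model.layers[:-3]:
    layer.trainable = False
model.compile(optimizer=Adam(lr=1e-3), loss='categorical_crossentropy')
# model.fit(...)

# Stage 2: unfreeze all layers and fine-tune with a smaller learning rate.
for layer in model.layers:
    layer.trainable = True
model.compile(optimizer=Adam(lr=1e-4), loss='categorical_crossentropy')
# model.fit(...)
```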
