A Keras implementation of YOLOv3 (Tensorflow backend) inspired by allanzelener/YAD2K.

Quick Start

  1. Download YOLOv3 weights from YOLO website.
  2. Convert the Darknet YOLO model to a Keras model.
  3. Run YOLO detection.
```
wget https://pjreddie.com/media/files/yolov3.weights
python convert.py yolov3.cfg yolov3.weights model_data/yolo.h5
python yolo_video.py [OPTIONS...] --image, for image detection mode, OR
python yolo_video.py [video_path] [output_path (optional)]
```

For Tiny YOLOv3, proceed in the same way, but specify the model path and anchor path with:

```
--model model_file
--anchors anchor_file
```


Use --help to see usage of yolo_video.py:

```
usage: yolo_video.py [-h] [--model MODEL] [--anchors ANCHORS]
                     [--classes CLASSES] [--gpu_num GPU_NUM] [--image]
                     [--input] [--output]

positional arguments:
  --input            Video input path
  --output           Video output path

optional arguments:
  -h, --help         show this help message and exit
  --model MODEL      path to model weight file, default model_data/yolo.h5
  --anchors ANCHORS  path to anchor definitions, default
                     model_data/yolo_anchors.txt
  --classes CLASSES  path to class definitions, default
                     model_data/coco_classes.txt
  --gpu_num GPU_NUM  Number of GPU to use, default 1
  --image            Image detection mode, will ignore all positional arguments
```
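The options above follow standard argparse conventions. Here is a self-contained sketch of a parser with the same flags and defaults; it mirrors the usage text, not the repo's actual source (in particular, treating --input/--output as optional flags is an assumption):

```python
import argparse

def build_parser():
    # Sketch of a CLI matching the usage text above; names and defaults
    # are taken from the README, everything else is illustrative.
    parser = argparse.ArgumentParser(description="YOLOv3 detection (sketch)")
    parser.add_argument('--model', default='model_data/yolo.h5',
                        help='path to model weight file')
    parser.add_argument('--anchors', default='model_data/yolo_anchors.txt',
                        help='path to anchor definitions')
    parser.add_argument('--classes', default='model_data/coco_classes.txt',
                        help='path to class definitions')
    parser.add_argument('--gpu_num', type=int, default=1,
                        help='number of GPUs to use')
    parser.add_argument('--image', action='store_true',
                        help='image detection mode; ignores video paths')
    parser.add_argument('--input', nargs='?', default=None,
                        help='video input path')
    parser.add_argument('--output', nargs='?', default=None,
                        help='video output path')
    return parser

args = build_parser().parse_args(['--image', '--model', 'my.h5'])
print(args.image, args.model, args.gpu_num)  # True my.h5 1
```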


  4. MultiGPU usage: use
    --gpu_num N
    to use N GPUs. It is passed to the Keras multi_gpu_model().
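Keras's multi_gpu_model() implements data parallelism: each batch is split into N sub-batches, the replicas compute on their slices, and the results are merged. A schematic NumPy illustration of why this matches single-device training for a mean-based loss (this is not Keras code, purely the arithmetic):

```python
import numpy as np

def mse_grad(w, X, y):
    """Gradient of the mean squared error 0.5*mean((Xw - y)^2) w.r.t. w."""
    return X.T @ (X @ w - y) / len(y)

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))   # one batch of 8 samples, 3 features
y = rng.normal(size=8)
w = rng.normal(size=3)

# Single-device gradient over the whole batch.
full_grad = mse_grad(w, X, y)

# Data-parallel: split the batch across N=2 replicas, average their gradients.
N = 2
parts = zip(np.array_split(X, N), np.array_split(y, N))
replica_grads = [mse_grad(w, Xi, yi) for Xi, yi in parts]
parallel_grad = np.mean(replica_grads, axis=0)

print(np.allclose(full_grad, parallel_grad))  # True (equal-sized splits)
```

The equivalence holds exactly only when the sub-batches have equal size, which is why the effective batch size is usually chosen as a multiple of the GPU count.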


Training

  1. Generate your own annotation file and class names file.
    One row for one image;
    Row format: image_file_path box1 box2 ... boxN;
    Box format: x_min,y_min,x_max,y_max,class_id (no space).
    For VOC dataset, try python voc_annotation.py.

    Here is an example:
    path/to/img1.jpg 50,100,150,200,0 30,50,200,120,3
    path/to/img2.jpg 120,300,250,600,2
  2. Make sure you have run

    python convert.py -w yolov3.cfg yolov3.weights model_data/yolo_weights.h5

    The file model_data/yolo_weights.h5 is used to load pretrained weights.
  3. Modify train.py and start training: python train.py.


    Use your trained weights or checkpoint weights with the command line option
    --model model_file
    when using yolo_video.py. Remember to modify the class path or anchor path with
    --classes class_file --anchors anchor_file.
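The annotation row format from step 1 is straightforward to parse. A minimal helper sketch (the function name is mine, not the repo's, and it assumes image paths contain no spaces, which the format itself requires):

```python
def parse_annotation_line(line):
    """Split 'img_path x1,y1,x2,y2,cls ...' into (path, list of box tuples)."""
    parts = line.strip().split()
    path = parts[0]
    boxes = []
    for box in parts[1:]:
        # Each box is five comma-separated integers with no spaces.
        x_min, y_min, x_max, y_max, class_id = (int(v) for v in box.split(','))
        boxes.append((x_min, y_min, x_max, y_max, class_id))
    return path, boxes

path, boxes = parse_annotation_line(
    "path/to/img1.jpg 50,100,150,200,0 30,50,200,120,3")
print(path)   # path/to/img1.jpg
print(boxes)  # [(50, 100, 150, 200, 0), (30, 50, 200, 120, 3)]
```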

If you want to use original pretrained weights for YOLOv3:

  1. wget https://pjreddie.com/media/files/darknet53.conv.74
  2. Rename it as darknet53.weights
  3. python convert.py -w darknet53.cfg darknet53.weights model_data/darknet53_weights.h5
  4. Use model_data/darknet53_weights.h5 in train.py

Some issues to know

  1. The test environment is

    • Python 3.5.2
    • Keras 2.1.5
    • tensorflow 1.6.0
  2. Default anchors are used. If you use your own anchors, probably some changes are needed.

  3. The inference result is not totally the same as Darknet but the difference is small.

  4. The speed is slower than Darknet. Replacing PIL with opencv may help a little.

  5. Always load pretrained weights and freeze layers in the first stage of training. Or try Darknet training. It's OK if there is a mismatch warning.

  6. The training strategy is for reference only. Adjust it according to your dataset and your goal. And add further strategy if needed.

  7. For speeding up the training process with frozen layers, train_bottleneck.py can be used. It first computes the bottleneck features of the frozen model and then trains only the last layers. This makes training on CPU possible in a reasonable time. See the Keras blog post on bottleneck features for more information.
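The bottleneck trick in item 7: run every training image once through the frozen layers, cache those activations ("bottleneck features"), then train only the final layers on the cache, so the expensive frozen forward pass is paid once instead of on every epoch. A schematic NumPy sketch of the idea, with a random projection standing in for the frozen backbone (this is not the repo's actual script):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the frozen backbone: a fixed random projection + ReLU.
W_frozen = rng.normal(size=(64, 16)) / np.sqrt(64)
def frozen_backbone(x):
    return np.maximum(x @ W_frozen, 0.0)

X = rng.normal(size=(200, 64))            # toy "images"
y = (X.sum(axis=1) > 0).astype(float)     # toy binary labels

# Pass 1 (done once): cache the bottleneck features of the frozen model.
bottleneck = frozen_backbone(X)

def log_loss(w):
    p = 1.0 / (1.0 + np.exp(-(bottleneck @ w)))
    eps = 1e-9
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

# Pass 2: train only the last layer (logistic regression) on the cache;
# the frozen backbone is never evaluated again.
w = np.zeros(16)
loss_before = log_loss(w)
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(bottleneck @ w)))
    w -= 0.1 * bottleneck.T @ (p - y) / len(y)   # gradient step on last layer only
loss_after = log_loss(w)
print(loss_after < loss_before)  # True
```

The same two-pass structure applies to the real model: the cached features replace the frozen stack as the input to the trainable head.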
