Keras-FasterRCNN

Keras implementation of Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks [1]. Cloned from an upstream repository.


  • Supports InceptionResNetV2 [2] as a feature extractor:
    • To use the InceptionResNetV2 from `keras.applications` as the feature extractor, create a new InceptionResNetV2 model file using `transfer/`.
    • If you use the original InceptionResNetV2 model as the feature extractor, you cannot load the pretrained weight parameters into Faster R-CNN.


  • Both Theano and TensorFlow backends are supported. However, compile times are very high with Theano, so TensorFlow is highly recommended.
  • The training script can be used to train a model. To train on Pascal VOC data, simply do:
    `python -p /path/to/pascalvoc/`
  • The Pascal VOC data set (images and annotations for bounding boxes around the classified objects) can be obtained from the official Pascal VOC website.
  • The simple parser provides an alternative way to input data, using a text file. Simply provide a text file, with each line containing an image file path, bounding-box coordinates, and a class name.


    The classes will be inferred from the file. To use the simple parser instead of the default Pascal VOC style parser, use the command-line option `-o simple`. For example:
    `python -o simple -p my_data.txt`
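As an illustration, here is a minimal sketch of such a text-file parser in plain Python. The comma-separated `filepath,x1,y1,x2,y2,class_name` line format is an assumption common to keras-frcnn forks and is not stated in this README:

```python
# Hypothetical sketch of a simple text-file annotation parser.
# Assumed line format (not confirmed by this README):
#   filepath,x1,y1,x2,y2,class_name

def parse_simple_annotations(lines):
    """Collect bounding boxes per image and infer the class list."""
    all_data = {}    # filepath -> list of bounding boxes
    classes = set()  # class names inferred from the file
    for line in lines:
        line = line.strip()
        if not line:
            continue
        filepath, x1, y1, x2, y2, class_name = line.split(',')
        classes.add(class_name)
        box = {'x1': int(x1), 'y1': int(y1),
               'x2': int(x2), 'y2': int(y2), 'class': class_name}
        all_data.setdefault(filepath, []).append(box)
    return all_data, sorted(classes)
```

Feeding it the lines of a `my_data.txt` would yield a per-image box dictionary plus the inferred class list.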
  • Running the training script will write weights to disk in an hdf5 file, as well as all the settings of the training run to a config file. These settings can then be loaded by the test script for any testing.
  • The test script can be used to perform inference, given pretrained weights and a config file. Specify a path to the folder containing images: `python -p /path/to/test_data/`

  • Data augmentation can be applied by specifying the corresponding command-line flags: one for horizontal flips, one for vertical flips, and one for 90-degree rotations.
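To illustrate what horizontal flipping does to an annotation, a minimal sketch in plain Python (not code from this repository):

```python
# Horizontally flip a bounding box inside an image of the given width.
# A box (x1, y1, x2, y2) mirrors to (width - x2, y1, width - x1, y2),
# so the box keeps its size but swaps sides of the image.
def hflip_box(box, img_width):
    x1, y1, x2, y2 = box
    return (img_width - x2, y1, img_width - x1, y2)
```

Vertical flips and 90-degree rotations transform the coordinates analogously; the training code must apply the same transform to both the image and its boxes.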


  • The config file contains all settings for the train or test run. The default settings match those in the original Faster R-CNN paper. The anchor box sizes are [128, 256, 512] and the ratios are [1:1, 1:2, 2:1].
  • The Theano backend by default uses a 7x7 pooling region, instead of 14x14 as in the Faster R-CNN paper. This cuts down compile time slightly.
  • The TensorFlow backend performs a resize on the pooling region, instead of max pooling. This is much more efficient and has little impact on results.
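The anchor configuration quoted above expands into nine concrete (width, height) anchor shapes. A small sketch in plain Python (the function name is illustrative, not from this repository):

```python
# Expand anchor box scales and aspect ratios into (width, height) pairs,
# using the defaults quoted above: sizes [128, 256, 512] and
# ratios 1:1, 1:2, 2:1. Three scales x three ratios = nine anchors.
def anchor_shapes(scales, ratios):
    shapes = []
    for size in scales:
        for rw, rh in ratios:
            shapes.append((size * rw, size * rh))
    return shapes

anchors = anchor_shapes([128, 256, 512], [[1, 1], [1, 2], [2, 1]])
```

Each of the nine shapes is slid over the feature map by the region proposal network to generate candidate boxes.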

Example output:

(four example detection images, ex1–ex4)


  • If you get this error:
    ValueError: There is a negative shape in the graph!
    then update Keras to the newest version.
  • Make sure to use python2, not python3. If you get this error:
    TypeError: unorderable types: dict() < dict()
    you are using python3.
  • If you run out of memory, try reducing the number of ROIs that are processed simultaneously by passing a lower value for the corresponding option. Alternatively, try reducing the image size from the default value of 600 (this setting is found in the config file).

[1] Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks, 2015
[2] Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning, 2016
