py-faster-rcnn

Faster R-CNN (Python implementation) -- see https://github.com/ShaoqingRen/faster_rcnn for the official MATLAB version.


py-faster-rcnn has been deprecated. Please see Detectron, which includes an implementation of Mask R-CNN.

Disclaimer

The official Faster R-CNN code (written in MATLAB) is available here. If your goal is to reproduce the results in our NIPS 2015 paper, please use the official code.

This repository contains a Python reimplementation of the MATLAB code. This Python implementation is built on a fork of Fast R-CNN. There are slight differences between the two implementations. In particular, this Python port

  - is ~10% slower at test-time, because some operations execute on the CPU in Python layers (e.g., 220ms / image vs. 200ms / image for VGG16)
  - gives similar, but not exactly the same, mAP as the MATLAB version
  - is not compatible with models trained using the MATLAB code due to the minor implementation differences
  - includes approximate joint training that is 1.5x faster than alternating optimization (for VGG16) -- see these slides for more information

Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks

By Shaoqing Ren, Kaiming He, Ross Girshick, Jian Sun (Microsoft Research)

This Python implementation contains contributions from Sean Bell (Cornell) written during an MSR internship.

Please see the official README.md for more details.

Faster R-CNN was initially described in an arXiv tech report and was subsequently published in NIPS 2015.

License

Faster R-CNN is released under the MIT License (refer to the LICENSE file for details).

Citing Faster R-CNN

If you find Faster R-CNN useful in your research, please consider citing:

```
@inproceedings{renNIPS15fasterrcnn,
    Author = {Shaoqing Ren and Kaiming He and Ross Girshick and Jian Sun},
    Title = {Faster {R-CNN}: Towards Real-Time Object Detection with Region Proposal Networks},
    Booktitle = {Advances in Neural Information Processing Systems ({NIPS})},
    Year = {2015}
}
```

Contents

  1. Requirements: software
  2. Requirements: hardware
  3. Basic installation
  4. Demo
  5. Beyond the demo: training and testing
  6. Usage

Requirements: software

NOTE: If you are having issues compiling and you are using a recent version of CUDA/cuDNN, please consult this issue for a workaround.

  1. Requirements for Caffe and pycaffe (see: Caffe installation instructions)

Note: Caffe must be built with support for Python layers! (A minimal sketch of such a layer follows this list.)

```Make
# In your Makefile.config, make sure to have this line uncommented
WITH_PYTHON_LAYER := 1
# Unrelatedly, it's also recommended that you use CUDNN
USE_CUDNN := 1
```

You can download my Makefile.config for reference.

  2. Python packages you might not have: cython, python-opencv, easydict
  3. [Optional] MATLAB is required for official PASCAL VOC evaluation only. The code now includes unofficial Python evaluation code.
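As context for the WITH_PYTHON_LAYER flag above: the RPN and data layers under this repo's lib/ directory are written as Python layers, i.e., subclasses of caffe.Layer. A minimal sketch of that interface (this pass-through layer is illustrative, not one of the repo's layers):

```Python
import caffe

class PassThroughLayer(caffe.Layer):
    """Illustrative Python layer: copies its bottom blob to its top blob."""

    def setup(self, bottom, top):
        # Called once at net construction; sanity-check the wiring.
        assert len(bottom) == 1 and len(top) == 1

    def reshape(self, bottom, top):
        # The output blob gets the same shape as the input blob.
        top[0].reshape(*bottom[0].data.shape)

    def forward(self, bottom, top):
        top[0].data[...] = bottom[0].data

    def backward(self, top, propagate_down, bottom):
        # Pass gradients straight through to the input.
        if propagate_down[0]:
            bottom[0].diff[...] = top[0].diff
```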

Requirements: hardware

  1. For training smaller networks (ZF, VGG_CNN_M_1024) a good GPU (e.g., Titan, K20, K40, ...) with at least 3G of memory suffices
  2. For training Fast R-CNN with VGG16, you'll need a K40 (~11G of memory)
  3. For training the end-to-end version of Faster R-CNN with VGG16, 3G of GPU memory is sufficient (using CUDNN)

Installation (sufficient for the demo)

  1. Clone the Faster R-CNN repository

```Shell
# Make sure to clone with --recursive
git clone --recursive https://github.com/rbgirshick/py-faster-rcnn.git
```

  2. We'll call the directory that you cloned Faster R-CNN into FRCN_ROOT

Ignore notes 1 and 2 if you followed step 1 above.

Note 1: If you didn't clone Faster R-CNN with the --recursive flag, then you'll need to manually clone the caffe-fast-rcnn submodule:

```Shell
git submodule update --init --recursive
```

Note 2: The caffe-fast-rcnn submodule needs to be on the faster-rcnn branch (or equivalent detached state). This will happen automatically if you followed step 1 instructions.

  3. Build the Cython modules

```Shell
cd $FRCN_ROOT/lib
make
```
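Among the modules this builds are the Cython bounding-box overlap and NMS routines used during training and testing. A quick sketch of one of them, assuming the build succeeded and $FRCN_ROOT/lib is on your PYTHONPATH:

```Python
import numpy as np
from utils.cython_bbox import bbox_overlaps  # compiled by `make` above

# Boxes are [x1, y1, x2, y2]; these values are illustrative.
boxes = np.array([[0, 0, 10, 10], [5, 5, 15, 15]], dtype=np.float64)
query = np.array([[0, 0, 10, 10]], dtype=np.float64)

# Returns the IoU of every box against every query box, shape (2, 1).
print(bbox_overlaps(boxes, query))
```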
  4. Build Caffe and pycaffe

```Shell
cd $FRCN_ROOT/caffe-fast-rcnn
# Now follow the Caffe installation instructions here:
#   http://caffe.berkeleyvision.org/installation.html
# If you're experienced with Caffe and have all of the requirements installed
# and your Makefile.config in place, then simply do:
make -j8 && make pycaffe
```

  5. Download pre-computed Faster R-CNN detectors

```Shell
cd $FRCN_ROOT
./data/scripts/fetch_faster_rcnn_models.sh
```

This will populate the $FRCN_ROOT/data folder with faster_rcnn_models. See data/README.md for details. These models were trained on VOC 2007 trainval.

Demo

After successfully completing basic installation, you'll be ready to run the demo.

To run the demo

```Shell
cd $FRCN_ROOT
./tools/demo.py
```

The demo performs detection using a VGG16 network trained for detection on PASCAL VOC 2007.
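Under the hood, tools/demo.py reduces to a few calls into the lib/fast_rcnn package; a condensed sketch (the network and image paths below are placeholders, see tools/demo.py for the real ones):

```Python
import caffe
import cv2
from fast_rcnn.config import cfg
from fast_rcnn.test import im_detect

cfg.TEST.HAS_RPN = True  # use the RPN for proposals, as tools/demo.py does

# Placeholder paths; demo.py resolves the real prototxt/caffemodel
# under models/ and data/faster_rcnn_models/.
net = caffe.Net('test.prototxt', 'vgg16_faster_rcnn.caffemodel', caffe.TEST)
im = cv2.imread('example.jpg')

# scores: (num proposals, num classes); boxes: (num proposals, 4 * num classes)
scores, boxes = im_detect(net, im)
print(scores.shape, boxes.shape)
```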

Beyond the demo: installation for training and testing models

  1. Download the training, validation, test data and VOCdevkit

```Shell
wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtrainval_06-Nov-2007.tar
wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtest_06-Nov-2007.tar
wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCdevkit_08-Jun-2007.tar
```

  2. Extract all of these tars into one directory named VOCdevkit

```Shell
tar xvf VOCtrainval_06-Nov-2007.tar
tar xvf VOCtest_06-Nov-2007.tar
tar xvf VOCdevkit_08-Jun-2007.tar
```

  3. It should have this basic structure

```Shell
$VOCdevkit/                           # development kit
$VOCdevkit/VOCcode/                   # VOC utility code
$VOCdevkit/VOC2007                    # image sets, annotations, etc.
# ... and several other directories ...
```

  4. Create symlinks for the PASCAL VOC dataset

```Shell
cd $FRCN_ROOT/data
ln -s $VOCdevkit VOCdevkit2007
```

Using symlinks is a good idea because you will likely want to share the same PASCAL dataset installation between multiple projects. (A quick check that the symlink is picked up is sketched after this list.)

  5. [Optional] follow similar steps to get PASCAL VOC 2010 and 2012
  6. [Optional] If you want to use COCO, please see some notes under data/README.md
  7. Follow the next sections to download pre-trained ImageNet models
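As mentioned in step 4, one way to confirm the VOCdevkit2007 symlink is wired up is to load the dataset through the repo's dataset factory (assuming $FRCN_ROOT/lib is on your PYTHONPATH):

```Python
from datasets.factory import get_imdb

# 'voc_2007_trainval' is one of the names the factory registers.
imdb = get_imdb('voc_2007_trainval')
print(imdb.name, imdb.num_images)  # VOC 2007 trainval contains 5,011 images
```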

Download pre-trained ImageNet models

Pre-trained ImageNet models can be downloaded for two of the networks described in the paper: ZF and VGG16.

```Shell
cd $FRCN_ROOT
./data/scripts/fetch_imagenet_models.sh
```

VGG16 comes from the Caffe Model Zoo, but is provided here for your convenience. ZF was trained at MSRA.

Usage

To train and test a Faster R-CNN detector using the alternating optimization algorithm from our NIPS 2015 paper, use experiments/scripts/faster_rcnn_alt_opt.sh. Output is written underneath $FRCN_ROOT/output.

```Shell
cd $FRCN_ROOT
./experiments/scripts/faster_rcnn_alt_opt.sh [GPU_ID] [NET] [--set ...]
# GPU_ID is the GPU you want to train on
# NET in {ZF, VGG_CNN_M_1024, VGG16} is the network arch to use
# --set ... allows you to specify fast_rcnn.config options, e.g.
#   --set EXP_DIR seed_rng1701 RNG_SEED 1701
```

("alt opt" refers to the alternating optimization training algorithm described in the NIPS paper.)

To train and test a Faster R-CNN detector using the approximate joint training method, use experiments/scripts/faster_rcnn_end2end.sh. Output is written underneath $FRCN_ROOT/output.

```Shell
cd $FRCN_ROOT
./experiments/scripts/faster_rcnn_end2end.sh [GPU_ID] [NET] [--set ...]
# GPU_ID is the GPU you want to train on
# NET in {ZF, VGG_CNN_M_1024, VGG16} is the network arch to use
# --set ... allows you to specify fast_rcnn.config options, e.g.
#   --set EXP_DIR seed_rng1701 RNG_SEED 1701
```

This method trains the RPN module jointly with the Fast R-CNN network, rather than alternating between training the two. It results in faster (~ 1.5x speedup) training times and similar detection accuracy. See these slides for more details.

Artifacts generated by the scripts in tools are written under $FRCN_ROOT/output.

Trained Fast R-CNN networks are saved under:

```
output/<experiment directory>/<dataset name>/
```

Test outputs are saved under:

```
output/<experiment directory>/<dataset name>/<network snapshot name>/
```
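These paths are computed by the get_output_dir helper in lib/fast_rcnn/config.py; a sketch of how the pieces combine:

```Python
from fast_rcnn.config import get_output_dir
from datasets.factory import get_imdb

imdb = get_imdb('voc_2007_trainval')
# Resolves to <repo root>/output/<cfg.EXP_DIR>/voc_2007_trainval
# (a network snapshot name is appended when a net is passed as well).
print(get_output_dir(imdb))
```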
