repulsion_loss_ssd

by bailvwangzi

Repulsion Loss: Detecting Pedestrians in a Crowd. https://arxiv.org/abs/1711.07752


Repulsion Loss implemented with SSD

Forked from PyTorch-SSD, a PyTorch implementation of the Single Shot MultiBox Detector from the 2016 paper by Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, and Alexander C. Berg. The official and original Caffe code can be found here.
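For context, the paper's objective adds two repulsion terms to the usual attraction (regression) loss, L = L_Attr + α·L_RepGT + β·L_RepBox; this fork implements the RepGT term (RepBox is still on the TODO list below). The following is a minimal sketch of RepGT based on the paper's Smooth_ln and IoG definitions; the function names and (x1, y1, x2, y2) box layout are assumptions for illustration, not this repo's actual API:

```python
# Minimal sketch of the RepGT term from "Repulsion Loss" (arXiv:1711.07752).
# All names and the (x1, y1, x2, y2) box layout are illustrative, not this repo's API.
import math
import torch

def smooth_ln(x, sigma=0.5):
    # Piecewise penalty from the paper: -ln(1 - x) for x <= sigma,
    # continued linearly beyond sigma so the gradient stays bounded.
    return torch.where(
        x <= sigma,
        -torch.log(1 - x),
        (x - sigma) / (1 - sigma) - math.log(1 - sigma),
    )

def iog(pred, gt):
    # Intersection over Ground-truth area: fraction of each gt box covered by pred.
    lt = torch.max(pred[:, :2], gt[:, :2])
    rb = torch.min(pred[:, 2:], gt[:, 2:])
    wh = (rb - lt).clamp(min=0)
    inter = wh[:, 0] * wh[:, 1]
    gt_area = (gt[:, 2] - gt[:, 0]) * (gt[:, 3] - gt[:, 1])
    return inter / gt_area.clamp(min=1e-6)

def rep_gt_loss(pred_boxes, rep_gt_boxes, sigma=0.5):
    # Each positive prediction is repelled from its "repulsion" ground truth:
    # the non-target gt box with the second-largest IoU (cf. the TODO list below).
    overlap = iog(pred_boxes, rep_gt_boxes).clamp(max=1 - 1e-6)
    return smooth_ln(overlap, sigma).mean()
```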

Table of Contents

  • Installation
  • Datasets
  • Training SSD
  • Evaluation
  • Example
  • Performance
  • Demos
  • TODO
  • Authors
  • References

Installation

  • Install PyTorch by selecting your environment on the website and running the appropriate command.
  • Clone this repository.
    • Note: We currently only support Python 3+.
  • Then download the dataset by following the instructions below.
  • We now support Visdom for real-time loss visualization during training!
    • To use Visdom in the browser:
      ```Shell
      # First install Python server and client
      pip install visdom
      # Start the server (probably in a screen or tmux)
      python -m visdom.server
      ```

    • Then (during training) navigate to http://localhost:8097/ (see the Train section below for training details). A minimal sketch of pushing a loss curve to Visdom follows this list.
  • Note: For training, we currently support VOC and COCO, and aim to add ImageNet support soon.
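Below is a minimal sketch of logging a loss curve to a running Visdom server; the window name and the random stand-in loss are illustrative, not this repo's actual training hooks:

```python
# Minimal Visdom logging sketch (assumes `python -m visdom.server` is running).
# The window name and the random stand-in loss are illustrative only.
import numpy as np
import visdom

vis = visdom.Visdom()  # connects to http://localhost:8097 by default
for iteration in range(100):
    loss = float(np.random.rand())  # stand-in for the real training loss
    vis.line(
        X=np.array([iteration]),
        Y=np.array([loss]),
        win='train_loss',  # fixed window id so points append to one curve
        update='append' if iteration > 0 else None,
        opts={'title': 'training loss'},
    )
```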

Datasets

To make things easy, we provide bash scripts to handle the dataset downloads and setup for you. We also provide simple dataset loaders that inherit `torch.utils.data.Dataset`, making them fully compatible with the `torchvision.datasets` API.
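As an illustration of that pattern, a toy detection loader might look like the sketch below; the class name, tensor shapes, and box layout are made up for the example, and the repo's real loaders additionally handle annotation parsing and augmentation:

```python
# Toy sketch of a detection dataset that subclasses torch.utils.data.Dataset.
# Names and shapes are illustrative, not this repo's actual loader classes.
import torch
from torch.utils.data import Dataset, DataLoader

class ToyDetectionDataset(Dataset):
    def __init__(self, images, targets):
        self.images = images    # list of CHW float tensors
        self.targets = targets  # list of [num_boxes, 5] tensors: x1, y1, x2, y2, label

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        return self.images[idx], self.targets[idx]

# Works with the standard DataLoader machinery, like torchvision.datasets.
images = [torch.rand(3, 300, 300) for _ in range(4)]
targets = [torch.tensor([[0.1, 0.1, 0.5, 0.5, 1.0]]) for _ in range(4)]
loader = DataLoader(ToyDetectionDataset(images, targets), batch_size=2)
```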

COCO

Microsoft COCO: Common Objects in Context

Download COCO 2014:

```Shell
# specify a directory for dataset to be downloaded into, else default is ~/data/
sh data/scripts/COCO2014.sh
```

VOC Dataset

PASCAL VOC: Visual Object Classes

Download VOC2007 trainval & test:

```Shell
# specify a directory for dataset to be downloaded into, else default is ~/data/
sh data/scripts/VOC2007.sh # <directory>
```

Download VOC2012 trainval:

```Shell
# specify a directory for dataset to be downloaded into, else default is ~/data/
sh data/scripts/VOC2012.sh # <directory>
```

Training SSD

  • First download the fc-reduced VGG-16 PyTorch base network weights at: https://s3.amazonaws.com/amdegroot-models/vgg16_reducedfc.pth
  • By default, we assume you have downloaded the file into the `ssd.pytorch/weights` dir:

    ```Shell
    mkdir weights
    cd weights
    wget https://s3.amazonaws.com/amdegroot-models/vgg16_reducedfc.pth
    ```

  • To train SSD using the train script, simply specify the parameters listed in `train.py` as flags or change them manually:

    ```Shell
    python train.py
    ```
  • Note:
    • For training, an NVIDIA GPU is strongly recommended for speed.
    • For instructions on Visdom usage/installation, see the Installation section.
    • You can pick up training from a checkpoint by specifying the path as one of the training parameters (again, see `train.py` for options). An example follows this list.
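For instance, resuming from a saved checkpoint might look like this; the `--resume` flag follows the upstream ssd.pytorch `train.py` options, and the checkpoint filename is hypothetical:

```Shell
# Resume training from a checkpoint (filename is illustrative)
python train.py --resume weights/ssd300_VOC_10000.pth
```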

Evaluation

To evaluate a trained network:

```Shell
python eval.py
```

You can specify the parameters listed in `eval.py` as flags or change them manually.
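For example, to point evaluation at the pre-trained weights from the Demos section below (the `--trained_model` flag name assumes the upstream ssd.pytorch `eval.py` options):

```Shell
python eval.py --trained_model weights/ssd300mAP77.43_v2.pth
```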

Example

SSD: [detection results figure]

SSD + repulsion loss: [detection results figure]

Performance

VOC2007 Test

mAP

| Method | mAP | mAP on Crowd |
|:-:|:-:|:-:|
| SSD | 77.52% | 48.24% |
| SSD+RepGT | 77.43% | 50.12% |

Demos

Use a pre-trained SSD network for detection

Download a pre-trained network

  • We are trying to provide PyTorch `state_dicts` (dicts of weight tensors) of the latest SSD model definitions trained on different datasets (a loading sketch follows this list).
  • Currently, we provide the following PyTorch models:
    • SSD300 trained on VOC0712 (newest PyTorch weights)
      • https://s3.amazonaws.com/amdegroot-models/ssd300mAP77.43_v2.pth
    • SSD300 trained on VOC0712 (original Caffe weights)
      • https://s3.amazonaws.com/amdegroot-models/ssd300VOC0712.pth
  • Our goal is to reproduce this table from the original paper:

    [Table figure: SSD results on multiple datasets]
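The sketch below loads one of the weight files above for inference; `build_ssd` and `load_weights` follow the upstream ssd.pytorch code and should be treated as assumptions if this fork has diverged:

```python
# Hedged sketch: load downloaded weights into an SSD300 for inference.
# build_ssd/load_weights mirror upstream ssd.pytorch; treat them as assumptions.
from ssd import build_ssd

net = build_ssd('test', 300, 21)  # phase, input size, num_classes (VOC: 20 + background)
net.load_weights('weights/ssd300mAP77.43_v2.pth')
net.eval()  # inference mode, e.g. for the demo notebook below
```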

Try the demo notebook

  • Make sure you have jupyter notebook installed.
  • Two alternatives for installing jupyter notebook:
    1. If you installed PyTorch with conda (recommended), then you should already have it. Just navigate to the ssd.pytorch cloned repo and run:

       ```Shell
       jupyter notebook
       ```

    2. If using [pip](https://pypi.python.org/pypi/pip):

       ```Shell
       # make sure pip is upgraded
       pip3 install --upgrade pip
       # install jupyter notebook
       pip install jupyter
       # Run this inside ssd.pytorch
       jupyter notebook
       ```

  • Now navigate to `demo/demo.ipynb` at http://localhost:8888 (by default) and have at it!

Try the webcam demo

  • Works on CPU (you may have to tweak `cv2.waitKey` for optimal fps) or on an NVIDIA GPU.
  • This demo currently requires OpenCV 2+ with Python bindings and an onboard webcam.
    • You can change the default webcam in `demo/live.py`.
  • Install the imutils package to leverage multi-threading on CPU:
    • `pip install imutils`
  • Running `python -m demo.live` opens the webcam and begins detecting! (A generic capture loop is sketched below.)
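For reference, the standard OpenCV capture pattern such a live demo builds on looks like this; it is a generic sketch, not the repo's actual `demo/live.py`:

```python
# Generic OpenCV webcam loop (standard pattern, not this repo's demo/live.py).
import cv2

cap = cv2.VideoCapture(0)  # 0 is the default webcam; change the index to pick another
while True:
    ret, frame = cap.read()
    if not ret:
        break
    # ... run the detector on `frame` and draw boxes here ...
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):  # raise the delay to trade fps for CPU
        break
cap.release()
cv2.destroyAllWindows()
```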

TODO

We have accumulated the following to-do list, which we hope to complete in the near future:

- [x] Support for the MS COCO dataset
- [ ] Support for SSD512 training and testing
- [ ] Support for training on custom datasets
- [ ] Support for the RepBox term
- [ ] Support for selecting the second-largest IoU from the same class

Authors

  • bailvwangzi

References

  • Xinlong Wang, Tete Xiao, Yuning Jiang, Shuai Shao, Jian Sun, Chunhua Shen. "Repulsion Loss: Detecting Pedestrians in a Crowd." CVPR 2018. https://arxiv.org/abs/1711.07752
  • Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, Alexander C. Berg. "SSD: Single Shot MultiBox Detector." ECCV 2016. https://arxiv.org/abs/1512.02325
