pytorch-YOLO-v1

by xiongzihua


An experiment for YOLO-v1, including training and testing.



中文博客 (Chinese blog post)

This is an experimental repository, and the implementation is not exactly the same as the original paper. Our performance on the VOC2007 test set is 0.665 mAP@0.5.

I wrote this code for learning purposes. In yoloLoss.py, only the forward pass is implemented; thanks to the autograd mechanism, the backward pass is computed automatically.
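A rough sketch of this pattern (not the exact loss in yoloLoss.py; the masks, weights, and tensor layout here are simplified for illustration): define the loss as an `nn.Module`, implement only `forward` with ordinary tensor operations, and let autograd derive the gradients.

```python
import torch.nn as nn
import torch.nn.functional as F

class ToyYoloLoss(nn.Module):
    """Simplified illustration: only forward is written; autograd builds
    the backward pass from the tensor operations used here."""
    def __init__(self, l_coord=5.0, l_noobj=0.5):
        super().__init__()
        self.l_coord = l_coord
        self.l_noobj = l_noobj

    def forward(self, pred, target):
        # pred, target: (batch, 7, 7, 30) grid tensors as in YOLO v1
        obj_mask = target[..., 4] > 0      # grid cells that contain an object
        noobj_mask = target[..., 4] == 0   # grid cells with no object

        coord_loss = F.mse_loss(pred[obj_mask][..., :4],
                                target[obj_mask][..., :4], reduction='sum')
        conf_loss = F.mse_loss(pred[obj_mask][..., 4],
                               target[obj_mask][..., 4], reduction='sum')
        noobj_loss = F.mse_loss(pred[noobj_mask][..., 4],
                                target[noobj_mask][..., 4], reduction='sum')
        class_loss = F.mse_loss(pred[obj_mask][..., 10:],
                                target[obj_mask][..., 10:], reduction='sum')
        return (self.l_coord * coord_loss + conf_loss
                + self.l_noobj * noobj_loss + class_loss)
```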

For the convenience of using PyTorch pretrained models, our backbone network is ResNet50; we add an extra block to increase the receptive field and drop the fully connected layer.
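A minimal sketch of that kind of backbone surgery (the extra-block design and channel sizes here are illustrative assumptions, not copied from this repo): take torchvision's pretrained resnet50, drop the average pool and fully connected layers, and append a small conv block so the output becomes a 7x7x30 prediction grid.

```python
import torch.nn as nn
import torchvision

class ResNetYoloBackbone(nn.Module):
    """ResNet-50 feature extractor with avgpool/fc dropped and an
    extra conv block appended; the head predicts a 7x7x30 grid."""
    def __init__(self, out_channels=30):
        super().__init__()
        resnet = torchvision.models.resnet50(pretrained=True)
        # keep everything up to (and including) layer4, drop avgpool and fc
        self.features = nn.Sequential(*list(resnet.children())[:-2])
        # extra block: enlarges the receptive field and maps 2048 -> out_channels
        self.extra = nn.Sequential(
            nn.Conv2d(2048, 512, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(512),
            nn.ReLU(inplace=True),
            nn.Conv2d(512, out_channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        x = self.features(x)          # (B, 2048, 14, 14) for a 448x448 input
        x = self.extra(x)             # (B, 30, 7, 7)
        return x.permute(0, 2, 3, 1)  # (B, 7, 7, 30) grid of predictions
```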

Efficiency has not been optimized; the code could probably be made faster.

Trained on VOC2007 + VOC2012:

| model | backbone | mAP@0.5 | FPS |
| --------------- | ---------- | ------- | --- |
| our ResNet_YOLO | ResNet50 | 66.5% | 57 |
| YOLO | darknet19? | 63.4% | 45 |
| YOLO VGG-16 | VGG-16 | 66.4% | 21 |

1. Dependencies

  • pytorch 0.2.0_2
  • opencv
  • visdom
  • tqdm

2. Prepare

  1. Download the voc2012train dataset.
  2. Download the voc2007test dataset.
  3. Put all images in one folder; I have provided the txt annotation files (a conversion sketch is shown after this list). ~~Convert the xml annotations to txt files: to use dataset.py, put xml2txt.py in the same folder as the VOC dataset, or change the Annotations path in xml2txt.py.~~
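If you want to regenerate the txt annotations yourself, the conversion is roughly as follows. The output format assumed here, one line per image of the form `filename x1 y1 x2 y2 class_id ...`, is a guess, so compare it with the provided txt files before using it with dataset.py.

```python
import os
import xml.etree.ElementTree as ET

VOC_CLASSES = ['aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car',
               'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse', 'motorbike',
               'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor']

def voc_xml_to_txt(annotations_dir, out_path):
    """Write one line per image: filename followed by
    x1 y1 x2 y2 class_id for every object (assumed format)."""
    with open(out_path, 'w') as out:
        for xml_name in sorted(os.listdir(annotations_dir)):
            if not xml_name.endswith('.xml'):
                continue
            root = ET.parse(os.path.join(annotations_dir, xml_name)).getroot()
            parts = [root.find('filename').text]
            for obj in root.iter('object'):
                box = obj.find('bndbox')
                coords = [box.find(k).text for k in ('xmin', 'ymin', 'xmax', 'ymax')]
                parts += coords + [str(VOC_CLASSES.index(obj.find('name').text))]
            out.write(' '.join(parts) + '\n')
```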

3. Train

Run `python train.py`

Be careful:

  1. Change the image file path.
  2. I recommend installing visdom and running it before training.
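With visdom, start the server first (`python -m visdom.server`) and have the training loop push loss values to it. A minimal example of that pattern (the window name and plotting options are just illustrative, not taken from train.py):

```python
import numpy as np
import visdom

vis = visdom.Visdom()  # assumes a visdom server is running on localhost:8097

def plot_loss(step, loss, win='train_loss'):
    """Append one (step, loss) point to a live line plot."""
    vis.line(X=np.array([step]), Y=np.array([loss]),
             win=win, update='append' if step > 0 else None,
             opts=dict(title='training loss', xlabel='iteration', ylabel='loss'))
```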

4. Evaluation

Run `python eval_voc.py`

Be careful: change the image file path.
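Evaluation reports mAP@0.5: a detection counts as a true positive when its IoU with an unmatched ground-truth box of the same class is at least 0.5. A small generic helper for that overlap test (not the exact code in eval_voc.py):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-10)

# a prediction with iou(...) >= 0.5 against a ground-truth box is a true positive
```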

5. Results

Our mAP on the VOC2007 test set is about 0.665. Some results are shown below; you can see more in the testimg folder.
