## Notes!!

The annotations have been updated to the latest version 1.0. I'll continue updating, but may not chase every upgrade to the newest release.

# mmdetection-annotated

## Introduction

This repo refers to the excellent implementation here: https://github.com/open-mmlab/mmdetection, with thanks to the author Kai Chen. The open-mmlab project, which contains various models and implementations of the latest papers, achieves great results on detection/segmentation tasks and is kind to rookies in the CV field.

## Getting started

For more information about installation or pre-trained model downloads, please refer to the official mmdetection repo or the blog here.

* **Test on images**
You can test the Faster RCNN demo by running the script `demo.py`. I have rewritten the demo file to run detection on a single image or a whole folder, as follows:

```python
import os
from mmdet.apis import init_detector, inference_detector, show_result

if __name__ == '__main__':
    config_file = 'configs/faster_rcnn_r50_fpn_1x.py'
    checkpoint_file = 'weights/faster_rcnn_r50_fpn_1x_20181010-3d1b3351.pth'
    # checkpoint_file = 'tools/work_dirs/mask_rcnn_r101_fpn_1x/epoch_1200.pth'
    img_path = '/home/bit/下载/n07753592'
    model = init_detector(config_file, checkpoint_file, device='cuda:0')
    # print(model)
    # The input can be either a folder or a single image
    if os.path.isdir(img_path):
        imgs = os.listdir(img_path)
        for i in range(len(imgs)):
            imgs[i] = os.path.join(img_path, imgs[i])
        # inference_detector also supports an iterable of images
        for i, result in enumerate(inference_detector(model, imgs)):
            print(i, imgs[i])
            show_result(imgs[i], result, model.CLASSES,
                        out_file='output/result_{}.jpg'.format(i))
    elif os.path.isfile(img_path):
        result = inference_detector(model, img_path)
        show_result(img_path, result, model.CLASSES)
```
* **Debug**
You can debug by setting a breakpoint with `ipdb.set_trace()`. Before that, make sure the **ipdb** package is installed and imports successfully.
* **Hook**
If you want to inspect intermediate variables, `hook.py` can serve as a reference for your own work (a minimal forward-hook sketch follows below).
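The repo's `hook.py` is not reproduced here; as a minimal sketch of the underlying mechanism it relies on, assuming only plain PyTorch, a forward hook can capture an intermediate output like this (the toy model and layer choice are hypothetical, not what `hook.py` actually instruments):

```python
import torch
import torch.nn as nn

# A toy model stands in for the detector; the same idea applies to
# mmdetection modules such as the backbone or FPN.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1),
)

features = {}

def save_output(name):
    def hook(module, inputs, output):
        features[name] = output.detach()  # stash the intermediate tensor
    return hook

handle = model[0].register_forward_hook(save_output('conv1'))
_ = model(torch.randn(1, 3, 32, 32))
print(features['conv1'].shape)  # torch.Size([1, 8, 32, 32])
handle.remove()  # detach the hook when done inspecting
```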
## Annotations
Annotations are attached throughout the code (only in the parts I have read so far; the unfinished parts will be completed as soon as possible). Besides, the `annotation` folder contains some explanatory documents as well.
* **Dataset Example**
A simple, small sample dataset is provided for testing (segmentation && detection). For more details, refer to the instructions [here](https://blog.csdn.net/mingqi1996/article/details/96706619).

* **CUDA related code**
I've deleted the files in the `mmdet/ops` folder because no annotations are attached there. However, the good news is that specific notes on RoIAlign are made here (see the sketch after this list).
* **Model visualization**
Take Mask RCNN for example: the model can be visualized as follows (for more details refer to model-structure-png).
* **Notes**
* **Configuration**
Explicit description of the config file; take Mask RCNN for example, refer to mask_rcnn_r101_fpn_1x.py (an abbreviated config sketch follows below).
* **MMCV&MMDET**
Specification of the mmcv lib and part of mmdet (more details about various models will be updated later).
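If you want to experiment with RoIAlign despite the removed `mmdet/ops` sources, here is a minimal sketch using the equivalent `torchvision.ops.roi_align`; the feature-map shape, stride, and boxes are made-up values for illustration:

```python
import torch
from torchvision.ops import roi_align

# A fake FPN-level feature map: batch of 1, 256 channels, 50x50 spatial size.
feats = torch.randn(1, 256, 50, 50)

# RoIs in (batch_index, x1, y1, x2, y2) format, in input-image coordinates.
rois = torch.tensor([[0, 10.0, 10.0, 200.0, 150.0],
                     [0, 40.0, 60.0, 120.0, 180.0]])

# spatial_scale maps image coordinates onto this feature map (stride 8 here);
# output_size=(7, 7) matches the usual RCNN head input.
pooled = roi_align(feats, rois, output_size=(7, 7),
                   spatial_scale=1.0 / 8, sampling_ratio=2)
print(pooled.shape)  # torch.Size([2, 256, 7, 7])
```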
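To make the config description concrete, below is an abbreviated sketch of the dict-style layout used by mmdetection 1.x configs such as `mask_rcnn_r101_fpn_1x.py`; the values shown are illustrative, not a complete working config:

```python
# Abbreviated sketch of an mmdetection 1.x style config; a real file such as
# configs/mask_rcnn_r101_fpn_1x.py contains many more fields (train/test
# pipelines, dataset paths, schedules, ...).
model = dict(
    type='MaskRCNN',
    backbone=dict(type='ResNet', depth=101, num_stages=4),
    neck=dict(type='FPN', in_channels=[256, 512, 1024, 2048],
              out_channels=256, num_outs=5),
)
optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001)
total_epochs = 12
```

Such a file is parsed with mmcv's `Config` helper:

```python
from mmcv import Config

cfg = Config.fromfile('configs/mask_rcnn_r101_fpn_1x.py')
print(cfg.model.type)  # fields are accessible with attribute syntax
```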

## Detection Results

Tested on the Mask RCNN model:

## Training

### dataset

* You can just use the COCO dataset; refer here.
* If you want to train on your custom dataset labeled with labelme, you need to first convert the json files to COCO style; this toolbox may help you.
* If you want to train on your custom dataset labeled with labelImg, you need to first convert the xml files to COCO style; this toolbox may also help you (a rough conversion sketch follows below).
* I have tested these tools recently to make sure they still work well; if questions still arise, please describe them in an issue or contact me, thanks.
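As a rough illustration of what such a conversion involves (this is not the toolbox's actual code), the sketch below turns a single labelImg/Pascal VOC style xml file into a COCO-style dict; the file and category names are placeholders:

```python
import json
import xml.etree.ElementTree as ET

# Hedged sketch of a labelImg (Pascal VOC xml) -> COCO conversion; the
# referenced toolboxes handle many more details (multiple files, ids, etc.).
def voc_to_coco(xml_path, categories):
    root = ET.parse(xml_path).getroot()
    images = [dict(id=1,
                   file_name=root.findtext('filename'),
                   width=int(root.findtext('size/width')),
                   height=int(root.findtext('size/height')))]
    annotations = []
    for i, obj in enumerate(root.iter('object')):
        x1 = float(obj.findtext('bndbox/xmin'))
        y1 = float(obj.findtext('bndbox/ymin'))
        x2 = float(obj.findtext('bndbox/xmax'))
        y2 = float(obj.findtext('bndbox/ymax'))
        annotations.append(dict(
            id=i + 1, image_id=1,
            category_id=categories.index(obj.findtext('name')) + 1,
            bbox=[x1, y1, x2 - x1, y2 - y1],  # COCO uses [x, y, w, h]
            area=(x2 - x1) * (y2 - y1), iscrowd=0))
    cats = [dict(id=j + 1, name=n) for j, n in enumerate(categories)]
    return dict(images=images, annotations=annotations, categories=cats)

# Example usage with placeholder names:
# coco = voc_to_coco('0001.xml', categories=['cat', 'dog'])
# json.dump(coco, open('annotations.json', 'w'))
```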

### learning rate

Remember to set the lr in the config file according to your own GPU number!!! (e.g. 1/8 of the default lr for 1 GPU, since the default configs assume 8 GPUs.)
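For instance, a back-of-the-envelope sketch of this linear scaling (the base lr of 0.02 is the Faster RCNN default and differs per model):

```python
# Linear scaling of the learning rate with the number of GPUs; the default
# mmdetection configs assume 8 GPUs. The base lr here (0.02) is the Faster
# RCNN default and may differ for other models.
base_lr, base_gpus = 0.02, 8
my_gpus = 1
lr = base_lr * my_gpus / base_gpus  # 0.0025, i.e. 1/8 of the default
optimizer = dict(type='SGD', lr=lr, momentum=0.9, weight_decay=0.0001)
```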
