centerpose
Pushing the extreme of pose estimation.

This repo is based on CenterNet and aims to push the boundary of human pose estimation: multi-person pose estimation using center point detection, as sketched below.
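For intuition only (this is not the repo's actual decoder), the center-point idea can be sketched in a few lines: the network predicts a center heatmap, a 3x3 max-pooling NMS keeps only local peaks, and the top-k peaks become candidate person centers. Tensor shapes and function names below are illustrative assumptions.

```python
# Minimal sketch of center-point peak extraction (illustrative only, not the
# repo's actual decoder). Assumes a predicted center heatmap of shape
# (batch, 1, H, W) with values in [0, 1].
import torch
import torch.nn.functional as F

def extract_centers(heatmap, k=100):
    # 3x3 max-pooling NMS: keep only locations that are local maxima.
    peaks = F.max_pool2d(heatmap, kernel_size=3, stride=1, padding=1)
    heatmap = heatmap * (peaks == heatmap).float()

    # Take the top-k scoring locations as candidate person centers.
    batch, _, height, width = heatmap.shape
    scores, indices = torch.topk(heatmap.view(batch, -1), k)
    ys = torch.div(indices, width, rounding_mode="floor").float()
    xs = (indices % width).float()
    return scores, xs, ys

if __name__ == "__main__":
    dummy = torch.rand(1, 1, 128, 128)       # stand-in for a network output
    scores, xs, ys = extract_centers(dummy, k=10)
    print(scores.shape, xs.shape, ys.shape)  # torch.Size([1, 10]) each
```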

Main results

Keypoint detection on COCO validation 2017

| Backbone | AP | FPS | TensorRT Speed | GFLOPs | Download |
|--------------|-----------|--------------|----------|----------|----------|
| DLA-34 | 62.7 | 23 | - | - | model |
| Resnet-50 | 54.5 | 28 | 33 | - | model |
| MobilenetV3 | 46.0 | 30 | - | - | model |
| ShuffleNetV2 | 43.9 | 25 | - | - | model |
| HRNet_W32 | 63.8 | 16 | - | - | model |
| HardNet | 46.0 | 30 | - | - | model |
| Darknet53 | 34.2 | 30 | - | - | model |
| EfficientDet | 38.2 | 30 | - | - | model |

Installation

git submodule init && git submodule update

Please refer to INSTALL.md for installation instructions.

Use CenterNet

We support demos for a single image, an image folder, video, and webcam.

First, download the DLA-34 model from the model zoo and put it anywhere you like.

Run:

cd tools; python demo.py --cfg ../experiments/dla_34_512x512.yaml --TESTMODEL /your/model/path/dla34_best.pth --DEMOFILE ../images/33823288584_1d21cf0a26_k.jpg --DEBUG 1
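If you prefer launching the demo from a script rather than the shell, the same command can be wrapped with subprocess. The paths below are the placeholders from the command above and need to be adjusted to your checkout.

```python
# Wrap the demo command above in a small Python launcher (same flags,
# placeholder paths). Adjust the model, config, and image paths to your setup.
import subprocess

cmd = [
    "python", "demo.py",
    "--cfg", "../experiments/dla_34_512x512.yaml",
    "--TESTMODEL", "/your/model/path/dla34_best.pth",
    "--DEMOFILE", "../images/33823288584_1d21cf0a26_k.jpg",
    "--DEBUG", "1",
]
subprocess.run(cmd, cwd="tools", check=True)
```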

The result for the example images should look like:

Evaluation

cd tools; python evaluate.py --cfg ../experiments/dla_34_512x512.yaml --TESTMODEL /your/model/path/dla34_best.pth --DEMOFILE --DEBUG 0
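For reference, the COCO keypoint AP reported in the table above is computed with pycocotools. A minimal sketch, assuming you have the COCO 2017 validation annotations and a detection results JSON (file names here are assumptions):

```python
# Minimal COCO keypoint evaluation sketch using pycocotools. The file names
# are assumptions; point them at your annotation file and prediction results.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("person_keypoints_val2017.json")     # ground truth
coco_dt = coco_gt.loadRes("keypoint_results.json")  # model predictions

evaluator = COCOeval(coco_gt, coco_dt, iouType="keypoints")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()                               # prints AP / AR
```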

Training

After installation, follow the instructions in DATA.md to set up the datasets.

We provide config files for all the experiments in the experiments folder.

cd ./tools
python -m torch.distributed.launch --nproc_per_node 4 train.py --cfg ../experiments/*.yaml
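torch.distributed.launch starts one process per GPU and passes each process a --local_rank argument. A hypothetical skeleton of what such a train.py entry point looks like (not the repo's actual training script) is roughly:

```python
# Hypothetical skeleton of what torch.distributed.launch expects from
# train.py (not the repo's actual script): parse --local_rank, bind the
# process to its GPU, and initialize the NCCL process group.
import argparse
import torch
import torch.distributed as dist

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--cfg", type=str, required=True)
    parser.add_argument("--local_rank", type=int, default=0)
    args = parser.parse_args()

    torch.cuda.set_device(args.local_rank)
    dist.init_process_group(backend="nccl", init_method="env://")

    # ... build the model from args.cfg, wrap it in DistributedDataParallel,
    # and run the training loop here.

if __name__ == "__main__":
    main()
```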

Demo

The demo files are located in the demo directory; together they form a robust human detection + tracking + face re-identification system.
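As an illustration of the tracking part only, a common baseline is to associate detections across frames greedily by bounding-box IoU. The sketch below is a generic example of that idea, not the code shipped in demo/.

```python
# Generic greedy IoU tracker sketch (illustrative only, not the code in
# demo/): each detection is matched to the existing track with the highest
# IoU above a threshold, otherwise a new track is started.
def iou(a, b):
    # Boxes are (x1, y1, x2, y2).
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def update_tracks(tracks, detections, threshold=0.3):
    # tracks: dict of track_id -> last box; detections: list of boxes.
    next_id = max(tracks, default=0) + 1
    for det in detections:
        best_id, best_iou = None, threshold
        for tid, box in tracks.items():
            score = iou(det, box)
            if score > best_iou:
                best_id, best_iou = tid, score
        if best_id is None:
            best_id, next_id = next_id, next_id + 1
        tracks[best_id] = det
    return tracks

if __name__ == "__main__":
    tracks = {}
    tracks = update_tracks(tracks, [(10, 10, 50, 80), (100, 40, 140, 120)])
    tracks = update_tracks(tracks, [(12, 12, 52, 82)])
    print(tracks)  # the first box keeps its track id across frames
```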

License

MIT License (refer to the LICENSE file for details).

Citation

If you find this project useful for your research, please use the following BibTeX entry.

@inproceedings{zhou2019objects,
  title={Objects as Points},
  author={Zhou, Xingyi and Wang, Dequan and Kr{\"a}henb{\"u}hl, Philipp},
  booktitle={arXiv preprint arXiv:1904.07850},
  year={2019}
}
