

SMAP: Single-Shot Multi-Person Absolute 3D Pose Estimation

Jianan Zhen*, Qi Fang*, Jiaming Sun, Wentao Liu, Wei Jiang, Hujun Bao, Xiaowei Zhou
ECCV 2020
Project Page

Introduction

Requirements

  • PyTorch >= 1.3
  • gcc >= 6.0
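Before building the extension, it may help to sanity-check these versions programmatically. A minimal sketch (the helper names are ours, not part of the repo; the version strings would come from `torch.__version__` and `gcc --version`):

```python
# Check the stated requirements (PyTorch >= 1.3, gcc >= 6) from version strings.
# These helpers are illustrative only and not part of the SMAP codebase.

def _parse(version):
    """Turn a dotted version string like '1.3.1' or '1.7.0+cu101' into an int tuple."""
    return tuple(int(p) for p in version.split("+")[0].split(".") if p.isdigit())

def meets_requirements(torch_version, gcc_version):
    """True if both versions satisfy the README's minimums."""
    return _parse(torch_version) >= (1, 3) and _parse(gcc_version) >= (6,)
```

For example, `meets_requirements("1.3.0", "6.0")` holds, while an older gcc such as 5.4 does not.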

Start

```bash
# install requirements
pip3 install -r requirements.txt

# install the depth-aware part association lib (C++ and CUDA); requires gcc >= 6
cd extensions
# make sure the CUDA path in setup.py is correct
python setup.py install
```

Run inference on custom images

Our pretrained models can be downloaded from SMAP and RefineNet.

```bash
cd exps/stage3_root2
vim test.sh
# set dataset_path
# set -t run_inference
# set -d test
bash test.sh
```

The result will be saved as a JSON file under "model_logs/stage3_root2/result/", and can be visualized with the code in "lib/visualize". Note that during evaluation, if RefineNet is used, we reproject the 2D keypoints with the refined depths, because RefineNet may violate the strict projection constraint.
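The reprojection mentioned above follows the standard pinhole camera model; a sketch of the round trip is below. The function names and array shapes are our own illustration, not the repo's API:

```python
import numpy as np

def backproject(kps_2d, depths, K):
    """Lift 2D keypoints (N, 2) with per-joint depths (N,) to camera-space 3D
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    X = (kps_2d[:, 0] - cx) * depths / fx
    Y = (kps_2d[:, 1] - cy) * depths / fy
    return np.stack([X, Y, depths], axis=1)

def project(kps_3d, K):
    """Project camera-space 3D joints (N, 3) back to pixel coordinates (N, 2)."""
    uv = (K @ kps_3d.T).T          # homogeneous pixel coordinates, shape (N, 3)
    return uv[:, :2] / uv[:, 2:3]  # divide by depth
```

With refined depths from RefineNet, back-projecting and re-projecting in this way keeps the 2D keypoints consistent with the 3D estimate.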

Prepare Data for training

Put all data under the "data/" folder with symbolic links according to the directory structure below, or change the paths in "dataset/data_settings.py".

```
$PROJECT_HOME
|-- data
|   |-- coco2017
|   |   |-- annotations
|   |   |-- train2017
|   |-- MuCo
|   |   |-- annotations
|   |   |-- images
|   |-- ...
```
Sources: MuCo is provided by Moon.
Only the mpi15 skeleton is supported in all stages for now. Our data format (annotation JSON, keypoint ordering, etc.) is described in lib/preprocess/data_format.md; we convert all datasets to this format.
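The symbolic links can be created with a short script. This is a convenience sketch, not part of the repo; the source paths are placeholders for wherever your datasets actually live:

```python
import os

def link_datasets(project_home, datasets):
    """Symlink dataset folders into $PROJECT_HOME/data/ in the layout above.
    `datasets` maps a folder name (e.g. "coco2017") to its absolute source path."""
    data_dir = os.path.join(project_home, "data")
    os.makedirs(data_dir, exist_ok=True)
    for name, src in datasets.items():
        dst = os.path.join(data_dir, name)
        if not os.path.exists(dst):
            os.symlink(src, dst)

# Example (placeholder paths):
# link_datasets("/home/me/SMAP", {"coco2017": "/data/coco2017", "MuCo": "/data/MuCo"})
```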

Train

```bash
# step 1: train the SMAP network
cd exps/stage3_root2
# vim train.sh
# change $PROJECT_HOME to the absolute path of the project
# set $CUDA_VISIBLE_DEVICES and nproc_per_node if using distributed training
bash train.sh

# step 2: generate training data for RefineNet
vim test.sh
# set -t generate_train
# set -d generation (to use training data) or -d test (to use test data)
bash test.sh

# step 3: train RefineNet
cd exps/refinenet_root2
bash train.sh
```

Acknowledgements

Some of the CUDA code is based on OpenPose. We would also like to thank Qing Shuai, Zhize Zhou, and Zichen Tian for their help.

Contact

Please open an issue or send an email to Qi Fang ([email protected]) if you have any questions.

Copyright

This work is affiliated with the ZJU-SenseTime Joint Lab of 3D Vision, and its intellectual property belongs to SenseTime Group Ltd.

Copyright SenseTime. All Rights Reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
