

Code for "SMAP: Single-Shot Multi-Person Absolute 3D Pose Estimation", ECCV 2020


SMAP: Single-Shot Multi-Person Absolute 3D Pose Estimation
Jianan Zhen*, Qi Fang*, Jiaming Sun, Wentao Liu, Wei Jiang, Hujun Bao, Xiaowei Zhou
ECCV 2020
Project Page



  • PyTorch >= 1.3
  • gcc >= 6.0
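As a quick sanity check before building, a small script can verify these version requirements. This is our own hypothetical helper, not part of the SMAP repo; it assumes `gcc` is on your PATH and PyTorch is importable.

```python
# Hypothetical helper to sanity-check the requirements above
# (PyTorch >= 1.3, gcc >= 6.0); not part of the SMAP repo.
import re
import subprocess

def version_tuple(version_string):
    # "1.3.1+cu92" -> (1, 3, 1); "6.5.0" -> (6, 5, 0)
    return tuple(int(x) for x in re.findall(r"\d+", version_string)[:3])

def gcc_ok(min_major=6):
    out = subprocess.run(["gcc", "-dumpversion"], capture_output=True, text=True)
    return version_tuple(out.stdout)[0] >= min_major

def torch_ok(minimum=(1, 3)):
    import torch  # imported here so the gcc check works without PyTorch
    return version_tuple(torch.__version__)[:2] >= minimum
```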


# install requirements
pip3 install -r requirements.txt

install depth-aware part association lib (c++ and cuda)

requirement: gcc >= 6

cd extensions

make the cuda path in right

python install

Run inference on custom images

Our pretrained model can be downloaded from SMAP and RefineNet.
```bash
cd exps/stage3_root2
# set dataset_path
# set -t run_inference
# set -d test
bash
```

The result will be saved as a json file in "model_logs/stage3_root2/result/". You can use the visualization code in "lib/visualize". Note that during evaluation, if RefineNet is used, we reproject the 2D keypoints with the refined depths, because RefineNet may break the strict projection constraint.

Prepare Data for training

Put all data under the "data/" folder with symbolic links according to the directory structure below, or change the paths in "dataset/".

```
$PROJECT_HOME
|-- data
|   |-- coco2017
|   |   |-- annotations
|   |   |-- train2017
|   |-- MuCo
|   |   |-- annotations
|   |   |-- images
|   |-- ...
```
Sources: MuCo is provided by Moon.
Only the mpi15 skeleton is supported for all stages now. Our data formats (annotation json, keypoint ordering, etc.) are defined in "lib/preprocess/"; we convert all datasets to this format.


```bash
# step 1: train the SMAP network
cd exps/stage3_root2
# vim
# change $PROJECT_HOME to the absolute path of the project
# set $CUDA_VISIBLE_DEVICES and nproc_per_node if using distributed training
```

```bash
# step 2: generate training data for RefineNet
# set -t generate_train
# set -d generation (if using training data); -d test (if using test data)
```

```bash
# step 3: train RefineNet
cd exps/refinenet_root2
bash
```
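Step 1 above asks you to keep nproc_per_node in sync with $CUDA_VISIBLE_DEVICES. A tiny helper like this (ours, not the repo's) can derive one from the other:

```python
# Hypothetical helper: derive nproc_per_node from CUDA_VISIBLE_DEVICES,
# e.g. CUDA_VISIBLE_DEVICES="0,1,2" -> 3. Not part of the SMAP repo.
import os

def nproc_per_node(default=1):
    devices = os.environ.get("CUDA_VISIBLE_DEVICES", "")
    gpu_ids = [d for d in devices.split(",") if d.strip()]
    return len(gpu_ids) if gpu_ids else default
```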


Some of the CUDA code is based on OpenPose. We would also like to thank Qing Shuai, Zhize Zhou and Zichen Tian for their help.


Please open an issue or send an email to Qi Fang ([email protected]) if you have any questions.


This work is affiliated with the ZJU-SenseTime Joint Lab of 3D Vision, and its intellectual property belongs to SenseTime Group Ltd.

Copyright SenseTime. All Rights Reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
