
Human Semantic Parsing for Person Re-identification

Code for our CVPR 2018 paper "Human Semantic Parsing for Person Re-identification". We used the Chainer framework for the implementation. The SPReID w/fg and SPReID w/fg-ft results reported in Table 5 of the paper (weight-sharing setting) can be reproduced with this code.

Please use the links below to download the semantic parsing model (LIP_iter_30000.chainermodel) and the InceptionV3 weights pre-trained on ImageNet (placed under data/dump/):

* semantic parsing model: https://www.dropbox.com/s/nw5h0lw6xrzp5ks/LIP_iter_30000.chainermodel?dl=0
* InceptionV3 weights: https://www.dropbox.com/sh/x0ey09q1nq7ci39/AACRuJa_f8N0_gIFcEWZUZ7ja?dl=0
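
If you prefer scripting the download, a minimal sketch is below. Changing dl=0 to dl=1 for a direct download is an assumption about Dropbox behaviour, not part of this repository; if it does not work, fetch the file manually in a browser. The InceptionV3 link points at a shared folder, so it is easiest to download its contents by hand into data/dump/.

```python
# Minimal download sketch (assumption: dl=1 enables a direct Dropbox download).
import os
import urllib.request

PARSING_MODEL_URL = (
    "https://www.dropbox.com/s/nw5h0lw6xrzp5ks/LIP_iter_30000.chainermodel?dl=1"
)

# The InceptionV3 weights are a shared folder; place them under data/dump/ manually.
os.makedirs("data/dump", exist_ok=True)

if not os.path.exists("LIP_iter_30000.chainermodel"):
    print("Downloading semantic parsing model...")
    urllib.request.urlretrieve(PARSING_MODEL_URL, "LIP_iter_30000.chainermodel")
```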

Directories & Files

/
├── checkpoints/  # checkpoint models are saved into this directory
│
├── data/dump/  # InceptionV3 weights pre-trained on ImageNet. Download using this [link](https://www.dropbox.com/sh/x0ey09q1nq7ci39/AACRuJa_f8N0_gIFcEWZUZ7ja?dl=0)
│
├── evaluation_features/ # extracted features are saved into this directory
│
├── evaluation_list/ # two image lists per evaluation dataset: one for gallery images and one for query images
│   ├── cuhk03_gallery.txt
│   ├── cuhk03_query.txt
│   ├── duke_gallery.txt
│   ├── duke_query.txt
│   ├── market_gallery.txt
│   └── market_query.txt
│
├── train_list/ # image lists to train the models
│   ├── train_10d.txt # training images collected from 10 datasets
│   ├── train_cuhk03.txt # training images from cuhk03
│   ├── train_duke.txt # training images from duke
│   └── train_market.txt # training images from market
│
├── LIP_iter_30000.chainermodel # download this model using this [link](https://www.dropbox.com/s/nw5h0lw6xrzp5ks/LIP_iter_30000.chainermodel?dl=0)
├── datachef.py
├── main.py
└── modelx.py
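
Before training, a quick sanity check over this layout can save a failed run. The sketch below is not part of the repository; it only verifies that the paths listed in the tree above exist.

```python
# Sanity-check sketch: confirm the expected directory layout before training.
import os

expected = [
    "checkpoints",
    "data/dump",                       # InceptionV3 ImageNet weights
    "evaluation_features",
    "evaluation_list/market_gallery.txt",
    "evaluation_list/market_query.txt",
    "train_list/train_10d.txt",
    "LIP_iter_30000.chainermodel",     # semantic parsing model
    "main.py",
]

missing = [p for p in expected if not os.path.exists(p)]
if missing:
    print("Missing:", ", ".join(missing))
else:
    print("Directory layout looks complete.")
```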

Train

cd $SPREID_ROOT
# train SPReID on 10 datasets
python main.py --train_set "train_10d" --label_dim "16803" --scales_reid "512,170" --optimizer "lr:0.01--lr_pretrained:0.01" --dataset_folder "/path/to/the/dataset"
# fine-tune SPReID on evaluation datasets (Market-1501, DukeMTMC-reID, CUHK03) with high-resolution images
python main.py --train_set "train_market" --label_dim_ft "751" --scales_reid "778,255" --optimizer "lr:0.01--lr_pretrained:0.001" --max_iter "50000" --dataset_folder "/path/to/the/dataset" --model_path_for_ft "/path/to/the/model"
python main.py --train_set "train_duke" --label_dim_ft "702" --scales_reid "778,255" --optimizer "lr:0.01--lr_pretrained:0.001" --max_iter "50000" --dataset_folder "/path/to/the/dataset" --model_path_for_ft "/path/to/the/model"
python main.py --train_set "train_cuhk03" --label_dim_ft "1367" --scales_reid "778,255" --optimizer "lr:0.01--lr_pretrained:0.001" --max_iter "50000" --dataset_folder "/path/to/the/dataset" --model_path_for_ft "/path/to/the/model"

Feature Extraction

cd $SPREID_ROOT
# Extract features using the model trained on 10 datasets. Run this command twice for each dataset: once with --eval_split "DATASET_gallery" and once with --eval_split "DATASET_query"
python main.py --extract_features 1 --train_set "train_10d" --eval_split "market_gallery" --scales_reid "512,170" --checkpoint 200000 --dataset_folder "/path/to/the/dataset"
# Extract features using the models trained on evaluation datasets.
python main.py --extract_features 1 --train_set "train_market" --eval_split "market_gallery" --scales_reid "778,255" --checkpoint 50000 --dataset_folder "/path/to/the/dataset"
python main.py --extract_features 1 --train_set "train_duke" --eval_split "duke_gallery" --scales_reid "778,255" --checkpoint 50000 --dataset_folder "/path/to/the/dataset"
python main.py --extract_features 1 --train_set "train_cuhk03" --eval_split "cuhk03_gallery" --scales_reid "778,255" --checkpoint 50000 --dataset_folder "/path/to/the/dataset"
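
Extracted features are written to evaluation_features/, but no scoring script is shown here. The sketch below ranks the gallery against the query set with cosine distance; the .npy file names, the identity-label files, and the array layout are assumptions, so adapt the loading code to whatever main.py actually writes. It also skips the same-camera filtering of the standard Market-1501/DukeMTMC-reID protocol, so its rank-1 will not exactly match the table below.

```python
# Ranking sketch. Assumption: features were exported as N x D float arrays with
# matching identity-label arrays (hypothetical file names; adapt to main.py's output).
import numpy as np

gallery = np.load("evaluation_features/market_gallery.npy")          # N_g x D features
query = np.load("evaluation_features/market_query.npy")              # N_q x D features
gallery_ids = np.load("evaluation_features/market_gallery_ids.npy")  # N_g identity labels
query_ids = np.load("evaluation_features/market_query_ids.npy")      # N_q identity labels

# L2-normalise and compute cosine distance between every query/gallery pair.
g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
q = query / np.linalg.norm(query, axis=1, keepdims=True)
dist = 1.0 - q @ g.T                                                  # N_q x N_g

# Rank-1 accuracy: the closest gallery image should share the query's identity.
nearest = dist.argmin(axis=1)
rank1 = float((gallery_ids[nearest] == query_ids).mean())
print(f"rank-1: {rank1:.4f}")
```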

Results

| Model | Market-1501 mAP (%) | Market-1501 rank-1 | CUHK03 mAP (%) | CUHK03 rank-1 | DukeMTMC-reID mAP (%) | DukeMTMC-reID rank-1 |
|---|---|---|---|---|---|---|
| SPReID w/fg | 77.62 | 90.88 | - | 87.69 | 65.66 | 81.73 |
| SPReID w/fg-ft | 80.54 | 92.34 | - | 89.68 | 69.29 | 83.80 |

Citation

@InProceedings{Kalayeh_2018_CVPR,
author = {Kalayeh, Mahdi M. and Basaran, Emrah and Gökmen, Muhittin and Kamasak, Mustafa E. and Shah, Mubarak},
title = {Human Semantic Parsing for Person Re-Identification},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2018}
}
