
# InsightFace: 2D and 3D Face Analysis Project

By Jia Guo and Jiankang Deng

## License

The code of InsightFace is released under the MIT License. There is no limitation on either academic or commercial usage.

The training data, including the annotations (and the models trained on these data), is available for non-commercial research purposes only.

## ArcFace Video Demo

ArcFace Demo

Please click the image to watch the YouTube video. For Bilibili users, click here.

## Recent Update

**`2020-04-27`**

**`2020.02.21`**: Instant discussion group created on QQ with group-id: 711302608. For English developers, see the install tutorial [here](https://github.com/deepinsight/insightface/issues/1069).

**`2020.02.16`**: RetinaFace can now detect faces with masks (for anti-COVID-19); see details [here](https://github.com/deepinsight/insightface/tree/master/RetinaFaceAntiCov).

**`2019.08.10`**: We achieved 2nd place at the [WIDER Face Detection Challenge 2019](http://wider-challenge.org/2019.html).

**`2019.05.30`**: [Presentation at cvmart](https://pan.baidu.com/s/1v9fFHBJ8Q9Kl9Z6GwhbY6A).

**`2019.04.30`**: Our face detector ([RetinaFace](https://github.com/deepinsight/insightface/tree/master/RetinaFace)) obtains state-of-the-art results on [the WiderFace dataset](http://shuoyang1213.me/WIDERFACE/WiderFace_Results.html).

**`2019.04.14`**: We will launch a [Light-weight Face Recognition challenge/workshop](https://github.com/deepinsight/insightface/tree/master/iccv19-challenge) at ICCV 2019.

**`2019.04.04`**: ArcFace achieved state-of-the-art performance (7/109) on the NIST Face Recognition Vendor Test (FRVT) 1:1 verification [report](https://www.nist.gov/sites/default/files/documents/2019/04/04/frvt_report_2019_04_04.pdf) (names: Imperial-000 and Imperial-001). Our solution is based on MS1MV2+DeepGlintAsian, ResNet100, and the ArcFace loss.

**`2019.02.08`**: Please check [https://github.com/deepinsight/insightface/tree/master/recognition](https://github.com/deepinsight/insightface/tree/master/recognition) for our parallel training code, which can easily and efficiently support one million identities on a single machine (8× 1080 Ti).

**`2018.12.13`**: Inference acceleration: [TVM-Benchmark](https://github.com/deepinsight/insightface/wiki/TVM-Benchmark).

**`2018.10.28`**: Light-weight attribute model [Gender-Age](https://github.com/deepinsight/insightface/tree/master/gender-age). About 1 MB; 10 ms on a single CPU core. 96% gender accuracy on the validation set and 4.1 age MAE.

**`2018.10.16`**: We achieved state-of-the-art performance on [Trillionpairs](http://trillionpairs.deepglint.com/results) (name: nttstar) and [IQIYI_VID](http://challenge.ai.iqiyi.com/detail?raceId=5afc36639689443e8f815f9e) (name: WitcheR).

## Contents

- [Deep Face Recognition](#deep-face-recognition)
  - [Introduction](#introduction)
  - [Training Data](#training-data)
  - [Train](#train)
  - [Pretrained Models](#pretrained-models)
  - [Verification Results On Combined Margin](#verification-results-on-combined-margin)
  - [Test on MegaFace](#test-on-megaface)
  - [512-D Feature Embedding](#512-d-feature-embedding)
  - [Third-party Re-implementation](#third-party-re-implementation)
- [Face Alignment](#face-alignment)
- [Face Detection](#face-detection)
- [Citation](#citation)
- [Contact](#contact)

## Deep Face Recognition

### Introduction

In this repository, we provide training data, network settings and loss designs for deep face recognition. The training data includes the normalised MS1M, VGG2 and CASIA-WebFace datasets, all packed in MXNet binary format. The network backbones include ResNet, MobileFaceNet, MobileNet, InceptionResNet_v2, DenseNet and DPN. The loss functions include Softmax, SphereFace, CosineFace, ArcFace and Triplet (Euclidean/Angular) loss.

![margin penalty for target logit](https://github.com/deepinsight/insightface/raw/master/resources/arcface.png)

Our method, ArcFace, was initially described in an [arXiv technical report](https://arxiv.org/abs/1801.07698). Using this repository, you can achieve LFW 99.80%+ and MegaFace 98%+ with a single model. This repository helps researchers and engineers develop deep face recognition algorithms quickly in just two steps: download the binary dataset and run the training script.
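Concretely, ArcFace adds an angular margin m to the target class before the softmax: the target logit cos(θ_yi) becomes cos(θ_yi + m), as pictured above. A minimal NumPy sketch of this margin penalty (the function name is illustrative, not the repository's API; s=64 and m=0.5 are the paper's defaults):

```python
import numpy as np

def arcface_logits(embeddings, weights, labels, s=64.0, m=0.5):
    """Additive angular margin on the target logit (illustrative sketch)."""
    # L2-normalise features and class weights so dot products are cos(theta).
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    cos_theta = emb @ w.T                              # (batch, num_classes)
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
    # Penalise only each sample's ground-truth class by the margin m.
    theta[np.arange(len(labels)), labels] += m
    return s * np.cos(theta)                           # feed to softmax cross-entropy
```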

### Training Data

All face images are aligned by [MTCNN](https://kpzhang93.github.io/MTCNN_face_detection_alignment/index.html) and cropped to 112x112.

Please check [Dataset-Zoo](https://github.com/deepinsight/insightface/wiki/Dataset-Zoo) for detailed information and dataset downloads.

- Please check _src/data/face2rec2.py_ for how to build a binary face dataset. Any publicly available _MTCNN_ implementation can be used to align the faces, and the performance should not change. We plan to improve the face normalisation step with full-pose alignment methods in the near future.
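For reference, the alignment itself is typically a similarity transform from the five MTCNN landmarks onto a fixed 112x112 template. A sketch with scikit-image and OpenCV (the template below is the commonly used ArcFace reference; treat the exact coordinates as an assumption to verify against the repository's preprocessing code):

```python
import cv2
import numpy as np
from skimage import transform as trans

# Reference 5-point template for 112x112 crops:
# left eye, right eye, nose tip, left mouth corner, right mouth corner.
TEMPLATE = np.array([
    [38.2946, 51.6963], [73.5318, 51.5014], [56.0252, 71.7366],
    [41.5493, 92.3655], [70.7299, 92.2041]], dtype=np.float32)

def align_face(img, landmarks):
    """Warp img so the 5 detected landmarks match the template (any MTCNN works)."""
    tform = trans.SimilarityTransform()
    tform.estimate(np.asarray(landmarks, dtype=np.float32), TEMPLATE)
    M = tform.params[0:2, :]               # 2x3 affine matrix for cv2
    return cv2.warpAffine(img, M, (112, 112), borderValue=0.0)
```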

### Train

1. Install MXNet with GPU support (Python 2.7).

```Shell
pip install mxnet-cu90
```


2. Clone the InsightFace repository. We refer to the cloned directory as `INSIGHTFACE_ROOT`.

```Shell
git clone --recursive https://github.com/deepinsight/insightface.git
```


3. Download the training set (MS1M-Arcface) and place it in `$INSIGHTFACE_ROOT/datasets/`. Each training dataset includes at least the following six files:

```Shell
faces_emore/
    train.idx
    train.rec
    property
    lfw.bin
    cfp_fp.bin
    agedb_30.bin
```


The first three files are the training dataset, while the last three are verification sets.
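If you want to sanity-check a packed dataset, the .idx/.rec pair can be read back with MXNet's indexed RecordIO reader. A quick sketch (paths are illustrative; note that in InsightFace's packing the first record is a header rather than an image, an assumption worth verifying against _src/data/face2rec2.py_):

```python
import mxnet as mx

# Open the packed training set (paths assume $INSIGHTFACE_ROOT/datasets/faces_emore).
imgrec = mx.recordio.MXIndexedRecordIO('faces_emore/train.idx',
                                       'faces_emore/train.rec', 'r')

s = imgrec.read_idx(1)                     # start at 1: record 0 is a header here
header, img_bytes = mx.recordio.unpack(s)
print('identity label:', header.label)     # class id of this face
img = mx.image.imdecode(img_bytes)         # decodes to a 112x112x3 NDArray
print('image shape:', img.shape)
```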

4. Train deep face recognition models. In this part, we assume you are in the directory `$INSIGHTFACE_ROOT/recognition/`.

```Shell
export MXNET_CPU_WORKER_NTHREADS=24
export MXNET_ENGINE_TYPE=ThreadedEnginePerDevice
```

Place and edit the config file:

```Shell
cp sample_config.py config.py
vim config.py # edit dataset path etc.
```


We give some examples below. Our experiments were conducted on Tesla P40 GPUs.

(1) Train ArcFace with LResNet100E-IR.

```Shell
CUDA_VISIBLE_DEVICES='0,1,2,3' python -u train.py --network r100 --loss arcface --dataset emore
```


It will output the verification results on _LFW_, _CFP-FP_ and _AgeDB-30_ every 2000 batches. You can check all options in _config.py_. This model can achieve _LFW 99.80+_ and _MegaFace 98.3%+_.

(2) Train CosineFace with LResNet50E-IR.

```Shell
CUDA_VISIBLE_DEVICES='0,1,2,3' python -u train.py --network r50 --loss cosface --dataset emore
```


(3) Train Softmax with LMobileNet-GAP.

```Shell
CUDA_VISIBLE_DEVICES='0,1,2,3' python -u train.py --network m1 --loss softmax --dataset emore
```


(4) Fine-tune the above Softmax model with Triplet loss.

```Shell
CUDA_VISIBLE_DEVICES='0,1,2,3' python -u train.py --network m1 --loss triplet --lr 0.005 --pretrained ./models/m1-softmax-emore,1
```


5. Verification results.

_LResNet100E-IR_ network trained on _MS1M-Arcface_ dataset with ArcFace loss:

| Method | LFW(%) | CFP-FP(%) | AgeDB-30(%) |  
| ------- | ------ | --------- | ----------- |  
| Ours | 99.80+ | 98.0+ | 98.20+ |

### Pretrained Models

You can use `$INSIGHTFACE_ROOT/src/eval/verification.py` to test all the pre-trained models.

**Please check [Model-Zoo](https://github.com/deepinsight/insightface/wiki/Model-Zoo) for more pretrained models.**
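Loading any of these checkpoints for inference follows the standard MXNet pattern. The sketch below assumes the Model-Zoo layout with checkpoint prefix `model` and the 512-D embedding exposed as the `fc1` layer; verify both for the model you download:

```python
import mxnet as mx
import numpy as np

# Load the checkpoint, e.g. models/model-r100-ii/model-0000.params.
sym, arg_params, aux_params = mx.model.load_checkpoint('models/model-r100-ii/model', 0)
sym = sym.get_internals()['fc1_output']        # assumed name of the embedding layer

model = mx.mod.Module(symbol=sym, context=mx.cpu(), label_names=None)
model.bind(data_shapes=[('data', (1, 3, 112, 112))], for_training=False)
model.set_params(arg_params, aux_params)

# Feed one aligned 112x112 RGB face in NCHW layout (random placeholder here).
face = np.random.uniform(0, 255, (1, 3, 112, 112)).astype(np.float32)
model.forward(mx.io.DataBatch(data=[mx.nd.array(face)]), is_train=False)
emb = model.get_outputs()[0].asnumpy()[0]
emb /= np.linalg.norm(emb)                     # L2-normalise before comparing faces
```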

### Verification Results on Combined Margin

A combined margin method was proposed as a function of the target logit value and the original angle θ:

```
COM(θ) = cos(m_1*θ + m_2) - m_3
```

For training with `m1=1.0, m2=0.3, m3=0.2`, run the following command:

```Shell
CUDA_VISIBLE_DEVICES='0,1,2,3' python -u train_softmax.py --network r100 --loss combined --dataset emore
```
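For intuition, here is a NumPy rendering of how the combined margin modifies the target-class logit; the special cases in the table below fall out of the three parameters (a sketch of the formula, not the training code itself):

```python
import numpy as np

def combined_margin(cos_theta, m1=1.0, m2=0.3, m3=0.2):
    """COM(theta) = cos(m1*theta + m2) - m3, applied to the target-class logit.

    (m1, m2, m3) = (1, 0.5, 0) recovers ArcFace, (1, 0, 0.35) CosineFace,
    and (1.5, 0, 0) a SphereFace-style multiplicative margin.
    """
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
    return np.cos(m1 * theta + m2) - m3
```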


Results using MS1M-IBUG (MS1M-V1):

| Method | m1 | m2 | m3 | LFW | CFP-FP | AgeDB-30 |
| ---------------- | ---- | ---- | ---- | ----- | ------ | -------- |
| W&F Norm Softmax | 1 | 0 | 0 | 99.28 | 88.50 | 95.13 |
| SphereFace | 1.5 | 0 | 0 | 99.76 | 94.17 | 97.30 |
| CosineFace | 1 | 0 | 0.35 | 99.80 | 94.4 | 97.91 |
| ArcFace | 1 | 0.5 | 0 | 99.83 | 94.04 | 98.08 |
| Combined Margin | 1.2 | 0.4 | 0 | 99.80 | 94.08 | 98.05 |
| Combined Margin | 1.1 | 0 | 0.35 | 99.81 | 94.50 | 98.08 |
| Combined Margin | 1 | 0.3 | 0.2 | 99.83 | 94.51 | 98.13 |
| Combined Margin | 0.9 | 0.4 | 0.15 | 99.83 | 94.20 | 98.16 |

### Test on MegaFace

Please check `$INSIGHTFACE_ROOT/Evaluation/megaface/` to evaluate the model accuracy on MegaFace. All aligned images were already provided.

### 512-D Feature Embedding

In this part, we assume you are in the directory `$INSIGHTFACE_ROOT/deploy/`. The input face image should generally be centre-cropped. We use the _RNet+ONet_ stages of _MTCNN_ to further align the image before sending it to the feature embedding network.

1. Prepare a pre-trained model.
2. Put the model under `$INSIGHTFACE_ROOT/models/`. For example, `$INSIGHTFACE_ROOT/models/model-r100-ii`.
3. Run the test script `$INSIGHTFACE_ROOT/deploy/test.py`, along the lines of the sketch below.
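Put together, steps 1-3 look roughly like the following (patterned on _deploy/test.py_; the argument names mirror that script's command-line options, and the image file names are illustrative):

```python
import cv2
import numpy as np
from argparse import Namespace
import face_model   # lives in $INSIGHTFACE_ROOT/deploy/

# Mirrors the argparse options in deploy/test.py (model is 'prefix,epoch').
args = Namespace(model='../models/model-r100-ii/model,0', ga_model='',
                 image_size='112,112', gpu=0, det=0, flip=0, threshold=1.24)
model = face_model.FaceModel(args)

img = model.get_input(cv2.imread('face_a.jpg'))   # detect, align, crop to 112x112
f1 = model.get_feature(img)                       # 512-D, L2-normalised embedding
f2 = model.get_feature(model.get_input(cv2.imread('face_b.jpg')))

# Embeddings are normalised, so cosine similarity is a plain dot product.
print('similarity:', np.dot(f1, f2))
```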

For a single cropped face image (112x112), the total inference time is only 17 ms on our testing server (Intel E5-2660 @ 2.00GHz, Tesla M40, _LResNet34E-IR_).

### Third-party Re-implementation

- TensorFlow: [InsightFace\_TF](https://github.com/auroua/InsightFace_TF)
- TensorFlow: [tf-insightface](https://github.com/AIInAi/tf-insightface)
- TensorFlow: [insightface](https://github.com/Fei-Wang/insightface)
- PyTorch: [InsightFace\_Pytorch](https://github.com/TreB1eN/InsightFace_Pytorch)
- PyTorch: [arcface-pytorch](https://github.com/ronghuaiyang/arcface-pytorch)
- Caffe: [arcface-caffe](https://github.com/xialuxi/arcface-caffe)
- Caffe: [CombinedMargin-caffe](https://github.com/gehaocool/CombinedMargin-caffe)
- TensorFlow: [InsightFace-tensorflow](https://github.com/luckycallor/InsightFace-tensorflow)
- TensorRT: [wang-xinyu/tensorrtx](https://github.com/wang-xinyu/tensorrtx)

## Face Alignment

Please check the [Menpo](https://github.com/jiankangdeng/MenpoBenchmark) Benchmark and [Dense U-Net](https://github.com/deepinsight/insightface/tree/master/alignment) for more details.

## Face Detection

Please check [RetinaFace](https://github.com/deepinsight/insightface/tree/master/RetinaFace) for more details.

## Citation

If you find _InsightFace_ useful in your research, please consider citing the following related papers:

```
@inproceedings{deng2019retinaface,
  title={RetinaFace: Single-stage Dense Face Localisation in the Wild},
  author={Deng, Jiankang and Guo, Jia and Zhou, Yuxiang and Yu, Jinke and Kotsia, Irene and Zafeiriou, Stefanos},
  booktitle={arxiv},
  year={2019}
}
@inproceedings{guo2018stacked,
  title={Stacked Dense U-Nets with Dual Transformers for Robust Face Alignment},
  author={Guo, Jia and Deng, Jiankang and Xue, Niannan and Zafeiriou, Stefanos},
  booktitle={BMVC},
  year={2018}
}
@article{deng2018menpo,
  title={The Menpo benchmark for multi-pose 2D and 3D facial landmark localisation and tracking},
  author={Deng, Jiankang and Roussos, Anastasios and Chrysos, Grigorios and Ververas, Evangelos and Kotsia, Irene and Shen, Jie and Zafeiriou, Stefanos},
  journal={IJCV},
  year={2018}
}
@inproceedings{deng2018arcface,
  title={ArcFace: Additive Angular Margin Loss for Deep Face Recognition},
  author={Deng, Jiankang and Guo, Jia and Xue, Niannan and Zafeiriou, Stefanos},
  booktitle={CVPR},
  year={2019}
}
```


## Contact

Jia Guo and Jiankang Deng
