Learning Generalized Spoof Cues for Face Anti-spoofing

This repository contains the code of "Learning Generalized Spoof Cues for Face Anti-spoofing (LGSC)", which reformulates face anti-spoofing (FAS) as an anomaly detection problem and learns discriminative spoof cues via a residual-learning framework.
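The anomaly-detection idea can be illustrated with a minimal numpy sketch (the cue maps and threshold-free scoring below are our illustration, not the repo's actual inference code): the network predicts a per-pixel spoof cue map, live faces are trained toward an all-zero map, and the magnitude of the map serves as the spoof score.

```python
import numpy as np

def spoof_score(cue_map):
    """Anomaly score: mean absolute magnitude of the predicted cue map.

    Live faces are pushed toward an (almost) all-zero cue map during
    training, so a larger score indicates a likely spoof.
    """
    return float(np.mean(np.abs(cue_map)))

# Hypothetical cue maps (H x W x C), standing in for network output.
live_cue = np.zeros((224, 224, 3))             # ideal live face: zero cues
spoof_cue = 0.3 * np.random.rand(224, 224, 3)  # spoof leaves visible residuals

assert spoof_score(live_cue) < spoof_score(spoof_cue)
```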



FaceForensics Benchmark

Based on LGSC, we achieved 1st place on the FaceForensics benchmark.

Dataset and Preprocessing

We use videos at two compression rates from the FaceForensics++ dataset as training data: c23 (medium compression) and c40 (high compression). As a preprocessing step, we extract and crop face frames from the raw videos, obtaining 0.96 million face frames in total: 0.16 million pristine frames and 0.8 million manipulated frames. We further balance the proportion of positive and negative samples during training.
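One common way to realize the balancing described above is to weight each class inversely to its frequency, so both classes contribute equally in expectation (a sketch; the frame counts come from the paragraph, the specific weighting scheme is our assumption, not necessarily the repo's):

```python
# Frame counts from the dataset description.
counts = {"pristine": 160_000, "manipulated": 800_000}

total = sum(counts.values())

# Weight each class inversely to its frequency so the expected
# sampled mass per class is equal during training.
weights = {cls: total / (len(counts) * n) for cls, n in counts.items()}

# Sanity check: each class now carries the same total weight.
mass = {cls: weights[cls] * counts[cls] for cls in counts}
```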

Technical Details

Our network is implemented in PaddlePaddle (dygraph mode).

  • Download and preprocess the dataset
    ├── manipulated_sequences
    │   └── DeepFakeDetection
    │   └── Deepfakes
    │   └── Face2Face
    │   └── FaceSwap
    │   └── NeuralTextures
    ├── original_sequences
    │   └── actors
    │   └── youtube
    │       └── c23
    │       └── c40
    │           └── images
    ├── train_add_train.txt
    ├── train_val_train.txt
    └── faceforensics_benchmark.txt
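A list file like the `train_val_train.txt` in the layout above could be generated by walking the tree and labeling frames by their top-level directory (a sketch under assumptions: the `<path> <label>` format and the 1 = manipulated / 0 = original convention are hypothetical, since the repo's exact list format is not shown here):

```python
import os
import tempfile

def build_file_list(root, out_path):
    """Walk the FaceForensics++ layout and write '<path> <label>' lines.

    Assumed label convention: 1 = manipulated (spoof), 0 = original (live).
    """
    with open(out_path, "w") as f:
        for split, label in (("manipulated_sequences", 1),
                             ("original_sequences", 0)):
            for dirpath, _, files in os.walk(os.path.join(root, split)):
                for name in sorted(files):
                    if name.endswith((".png", ".jpg")):
                        f.write(f"{os.path.join(dirpath, name)} {label}\n")

# Demo on a throwaway directory mimicking the layout above.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "manipulated_sequences", "Deepfakes", "c23", "images"))
os.makedirs(os.path.join(root, "original_sequences", "youtube", "c23", "images"))
open(os.path.join(root, "manipulated_sequences", "Deepfakes", "c23", "images", "000.png"), "w").close()
open(os.path.join(root, "original_sequences", "youtube", "c23", "images", "000.png"), "w").close()

out_path = os.path.join(root, "train_list.txt")
build_file_list(root, out_path)
lines = open(out_path).read().splitlines()
```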
  • Download and convert the ImageNet pretrained model

    ```
    ./pretrained
    ├── resnet18-5c106cde.pth
    └── resnet18-torch.pdparams
    ```
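Converting `resnet18-5c106cde.pth` into `resnet18-torch.pdparams` essentially means renaming parameters and transposing fully-connected weights, since PaddlePaddle stores linear weights as `(in_features, out_features)` while PyTorch uses `(out_features, in_features)`. A numpy-only illustration of that transform (the demo key names are hypothetical; a real script would load the `.pth` with torch and save the result with paddle):

```python
import numpy as np

def torch_to_paddle(state_dict):
    """Convert a (numpy-valued) torch-style state dict to paddle conventions.

    - BatchNorm buffers 'running_mean'/'running_var' are named
      '_mean'/'_variance' in paddle.
    - Linear (fc) weights are transposed from (out, in) to (in, out).
    """
    converted = {}
    for name, value in state_dict.items():
        new_name = (name.replace("running_mean", "_mean")
                        .replace("running_var", "_variance"))
        if name.endswith("fc.weight"):
            value = value.T  # torch: (out, in) -> paddle: (in, out)
        converted[new_name] = value
    return converted

# Dummy arrays stand in for tensors loaded from resnet18-5c106cde.pth.
demo = {
    "bn1.running_mean": np.zeros(64),
    "bn1.running_var": np.ones(64),
    "fc.weight": np.zeros((1000, 512)),
}
converted = torch_to_paddle(demo)
```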

  • Train
  • Test


Citation

```
    title={Learning Generalized Spoof Cues for Face Anti-spoofing},
    author={Haocheng Feng and Zhibin Hong and Haixiao Yue and Yang Chen and Keyao Wang and Junyu Han and Jingtuo Liu and Errui Ding},
```
