
This is a PyTorch implementation of the ACL 2019 paper "Simple and Effective Text Matching with Richer Alignment Features". The original TensorFlow implementation was released by the paper's authors.


Simple and Effective Text Matching

RE2 is a fast and strong neural architecture for general-purpose text matching. In a text matching task, a model takes two text sequences as input and predicts their relationship. This method explores what is sufficient for strong performance on these tasks: it simplifies many slow components previously considered core building blocks of text matching models, while keeping three kinds of features directly available for inter-sequence alignment: original point-wise features, previous aligned features, and contextual features.
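The inter-sequence alignment that consumes these features can be illustrated with a minimal, dependency-free sketch of dot-product soft alignment. The function names here are illustrative only; the repository's actual implementation is built from PyTorch modules.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def align(a, b):
    """Soft-align sequence a to sequence b with dot-product attention.

    a, b: lists of feature vectors. Each position of a receives a
    weighted average of b's vectors (its "aligned features").
    """
    dim = len(b[0])
    aligned = []
    for u in a:
        weights = softmax([dot(u, v) for v in b])
        aligned.append([sum(w * v[i] for w, v in zip(weights, b))
                        for i in range(dim)])
    return aligned

# In RE2, the vector fed into this alignment is the concatenation of the
# original embedding, the previous block's aligned output, and the current
# block's contextual (encoder) features.
print(align([[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]]))
```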

RE2 achieves performance on par with the state of the art on four benchmark datasets: SNLI, SciTail, Quora and WikiQA, across the tasks of natural language inference, paraphrase identification and answer selection, with few or no task-specific adaptations. Its inference speed is at least six times faster than that of models with comparable performance.

The following table lists major experiment results. The paper reports the average and standard deviation of 10 runs. Inference time (in seconds) is measured by processing a batch of 8 pairs of length 20 on Intel i7 CPUs. The computation time of POS features used by CSRAN and DIIN is not included.

|Model|SNLI|SciTail|Quora|WikiQA|Inference Time|
|---|---|---|---|---|---|
|BiMPM|86.9|-|88.2|0.731|0.05|
|ESIM|88.0|70.6|-|-|-|
|DIIN|88.0|-|89.1|-|1.79|
|CSRAN|88.7|86.7|89.2|-|0.28|
|RE2|88.9±0.1|86.0±0.6|89.2±0.2|0.7618±0.0040|0.03~0.05|

Refer to the paper for more details of the components and experiment results.


  • Install Python >= 3.6 and pip
  • pip install -r requirements.txt
  • Install PyTorch
  • Download GloVe word vectors (glove.840B.300d) to

Data used in the paper are prepared as follows:


  • Download and unzip SNLI (pre-processed by Tay et al.) to
  • Unzip all zip files in the "data/orig/SNLI" folder (cd data/orig/SNLI && gunzip *.gz).
  • cd data && python


  • Download and unzip SciTail dataset to
  • cd data && python


  • Download and unzip Quora dataset (pre-processed by Wang et al.) to
  • cd data && python


  • Download and unzip WikiQA to
  • cd data && python
  • Download and unzip the evaluation scripts. Use the make -B command to compile the source files in . Move the binary file "trec_eval" to
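The trec_eval binary computes the ranking metrics used for WikiQA (the results table reports MRR-style numbers around 0.76). For intuition, here is a minimal mean-reciprocal-rank sketch; this is a hypothetical helper for illustration, not part of the repository or of trec_eval.

```python
def mean_reciprocal_rank(ranked_lists):
    """MRR over queries. Each inner list holds 0/1 relevance labels for a
    query's candidate answers, in the order the model ranked them."""
    total = 0.0
    for labels in ranked_lists:
        rr = 0.0
        for rank, rel in enumerate(labels, start=1):
            if rel:
                rr = 1.0 / rank  # reciprocal rank of the first relevant answer
                break
        total += rr
    return total / len(ranked_lists)

# Two queries: the first has its relevant answer at rank 2, the second at rank 1.
print(mean_reciprocal_rank([[0, 1, 0], [1, 0]]))  # → 0.75
```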


To train a new text matching model, run the following command:

python $config_file.json5

Example configuration files are provided in the configs directory:

  • configs/main.json5: replicates the main experiment results in the paper.
  • configs/robustness.json5: robustness checks.
  • configs/ablation.json5: ablation study.

To write your own configuration files, follow this template:

    [
        {
            name: 'exp1', // name of your experiment, can be the same across different data
            __parents__: [
                'default', // always put the default on top
                'data/quora', // data-specific configurations in `configs/data`
                // 'debug', // use "debug" to quickly debug your code
            ],
            __repeat__: 5, // how many repetitions you want
            blocks: 3, // other configurations for this experiment
        },
        // multiple configurations are executed sequentially
        {
            name: 'exp2', // results under the same name will be overwritten
            __parents__: [
                'default',
                'data/quora',
            ],
            __repeat__: 5,
            blocks: 4,
        },
    ]
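A parent-based configuration scheme like this can be resolved by merging parent configs in order and letting the experiment's own keys override them. The sketch below is hypothetical and not the repository's actual loader; the hidden size 200 and batch size 512 mirror values mentioned later in this README, while the other values are made up.

```python
# Illustrative registry standing in for files under `configs/`.
CONFIGS = {
    'default': {'blocks': 2, 'hidden_size': 200, 'batch_size': 512},
    'data/quora': {'data_dir': 'data/quora', 'min_df': 5},
}

def resolve(config):
    """Merge parent configs in order, then apply the config's own keys."""
    merged = {}
    for parent in config.get('__parents__', []):
        merged.update(resolve(CONFIGS[parent]))  # later parents override earlier ones
    own = {k: v for k, v in config.items() if k != '__parents__'}
    merged.update(own)  # the experiment's own keys win
    return merged

exp1 = {'name': 'exp1', '__parents__': ['default', 'data/quora'], 'blocks': 3}
print(resolve(exp1))
# blocks comes out as 3: exp1's own value overrides the default of 2
```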

To check the configurations only, use

python $config_file.json5 --dry

To evaluate an existing model, use

python $model_path $data_file

Here's an example:
python models/snli/benchmark/ data/snli/train.txt 
python models/snli/benchmark/ data/snli/test.txt 

Note that multi-GPU training is not yet supported in the pytorch implementation. A single 16G GPU is sufficient for training when blocks < 5 with hidden size 200 and batch size 512. All the results reported in the paper except the robustness checks can be reproduced with a single 16G GPU.


Please cite the ACL paper if you use RE2 in your work:

@inproceedings{yang2019simple,
  title={Simple and Effective Text Matching with Richer Alignment Features},
  author={Yang, Runqi and Zhang, Jianhai and Gao, Xing and Ji, Feng and Chen, Haiqing},
  booktitle={Association for Computational Linguistics (ACL)},
  year={2019}
}


This project is under Apache License 2.0.
