fairseq-gec

Introduction

Source code for the paper: Improving Grammatical Error Correction via Pre-Training a Copy-Augmented Architecture with Unlabeled Data

Authors: Wei Zhao, Liang Wang, Kewei Shen, Ruoyu Jia, Jingming Liu
Arxiv url: https://arxiv.org/abs/1903.00138
Comments: Accepted by NAACL 2019 (oral)

Dependencies

  • PyTorch version >= 1.0.0
  • Python version >= 3.6
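
As a minimal sketch, these requirements can be satisfied in a fresh virtual environment like this (the environment name is our placeholder; torch is the standard PyPI package name, not pinned by this repo):

python3 -m venv gec-env
source gec-env/bin/activate
pip install "torch>=1.0.0"                            # PyTorch
python -c "import torch; print(torch.__version__)"    # verify the install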

Downloads

  • Download the CoNLL-2014 evaluation scripts:

cd gec_scripts/
sh download.sh

  • Download the pre-processed data & pre-trained models:

Pre-trained model (Google Drive / Baidu Pan):
  - url1: https://drive.google.com/file/d/1zewifHUUwvqc2F-MfDRsZFio6PlSzx2c/view?usp=sharing
  - url2: https://pan.baidu.com/s/1hCwQeNFjng0NiViJq6fg (code: mxrf)

Pre-processed data (Google Drive, train/valid/test):
  - url: https://drive.google.com/open?id=17s-TZiM6ilQ-SHklxTUun2Jdgg8B9zS3
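
If you prefer fetching the Google Drive files from the command line, the third-party gdown tool can download them by file id (using gdown is our suggestion, not part of this repo; the ids below are taken from the links above):

pip install gdown
gdown 1zewifHUUwvqc2F-MfDRsZFio6PlSzx2c    # pre-trained model
gdown 17s-TZiM6ilQ-SHklxTUun2Jdgg8B9zS3    # pre-processed data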

Train with the pre-trained model

cd fairseq-gec
pip install --editable .
sh train.sh ${device_id} ${experiment_name}
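
For example, to train on GPU 0 under the experiment name _exp1 (both values are illustrative placeholders you choose yourself):

sh train.sh 0 _exp1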

Train without the pre-trained model

Modify train.sh to train without the pre-trained model:

  • delete the "--pretrained-model" parameter
  • change the value of "--max-epoch" to 15 (more epochs are needed when training without pre-trained parameters); see the sketch below
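
The edit amounts to something like this (a sketch only; the flag layout and variable names in your copy of train.sh may differ):

# in train.sh, inside the fairseq training command:
#   --pretrained-model ${pretrained_model}    <- delete this flag (variable name is illustrative)
#   --max-epoch 15                            <- change the existing value to 15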

Evaluate on the CoNLL-2014 test dataset

sh g.sh ${device_id} ${experiment_name}
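
For example, to evaluate the _exp1 experiment on GPU 0 (the same illustrative placeholders as in the training example above):

sh g.sh 0 _exp1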

Get pre-trained models from scratch

We have published our pre-trained models, as mentioned in the Downloads section. We list the steps here in case someone wants to reproduce the pre-trained models from scratch.

1. # prepare target sentences using the One Billion Word benchmark dataset (see below)
2. sh noise.sh                    # generate the noised source sentences
3. sh preprocess_noise_data.sh    # preprocess the data
4. sh pretrain.sh 0,1 _pretrain   # pre-train (here on GPUs 0 and 1)
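
For step 1, one way to obtain the benchmark corpus (the mirror URL is our assumption; check http://www.statmt.org/lm-benchmark/ for the authoritative location):

wget http://www.statmt.org/lm-benchmark/1-billion-word-language-modeling-benchmark-r13output.tar.gz
tar xzf 1-billion-word-language-modeling-benchmark-r13output.tar.gz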

Acknowledgments

Our code was modified from the fairseq codebase. We use the same license as fairseq(-py).

Citation

Please cite as:

@article{zhao2019improving,
  title={Improving Grammatical Error Correction via Pre-Training a Copy-Augmented Architecture with Unlabeled Data},
  author={Zhao, Wei and Wang, Liang and Shen, Kewei and Jia, Ruoyu and Liu, Jingming},
  journal={arXiv preprint arXiv:1903.00138},
  year={2019}
}
