by yoavg

Assessing syntactic abilities of BERT




Assessing the syntactic abilities of BERT.


Evaluate Google's BERT-Base and BERT-Large models on the syntactic agreement datasets from Linzen, Goldberg and Dupoux (2016), Marvin and Linzen (2018), and Gulordava et al. (2018).

The code is quite messy, as it was hacked together here and there, but I believe it is accurate. This README lists the data files and shows how to run the evaluation. For more details and results, see the arxiv report.
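The evaluation compares the score the model assigns to the correct verb form against the score it assigns to the incorrect one at the target position. A minimal sketch of that comparison step, decoupled from the model itself (the helper name and the toy score vector below are illustrative, not taken from the repo):

```python
def prefers_correct(scores, correct_id, wrong_id):
    """Return True if the model scores the correct verb form above the
    incorrect one at the target position.  `scores` is a sequence of
    per-vocabulary-item scores for that position."""
    return scores[correct_id] > scores[wrong_id]

# Toy example: a fake 5-word vocabulary where id 2 is the grammatical
# verb form and id 4 the ungrammatical one.
scores = [0.1, 0.3, 2.5, 0.0, 1.1]
print(prefers_correct(scores, correct_id=2, wrong_id=4))  # True
```

In the actual evaluation the scores would come from the BERT model's output distribution; per-condition accuracy is the fraction of stimuli for which this comparison comes out True.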

Data Files

Data taken from the github repos of Linzen, Goldberg and Dupoux (LGD), Marvin and Linzen (ML), and Gulordava et al.

| File | Description |
|---|---|
| | stimuli from Marvin and Linzen, dumped from the pickle files of ML |
| wiki.vocab | from LGD, used for verb inflections |
| lgd_dataset.tsv | processed data from LGD |
| | data from Gulordava et al. |

lgd_dataset.tsv is created by:

```
gunzip agr_50_mostcommon_10K.tsv.gz
python > lgd_dataset.tsv
```
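The decompression step can also be done from Python's standard gzip module, which is handy when gunzip isn't available (a self-contained sketch; the tiny sample file written first stands in for the real archive):

```python
import gzip
import shutil

# Write a tiny stand-in for agr_50_mostcommon_10K.tsv.gz so this sketch
# is self-contained; with the real file, skip this step.
with gzip.open("agr_50_mostcommon_10K.tsv.gz", "wt") as f:
    f.write("some\ttab\tseparated\tline\n")

# Decompress, mirroring `gunzip agr_50_mostcommon_10K.tsv.gz`.
with gzip.open("agr_50_mostcommon_10K.tsv.gz", "rb") as fin, \
        open("agr_50_mostcommon_10K.tsv", "wb") as fout:
    shutil.copyfileobj(fin, fout)

print(open("agr_50_mostcommon_10K.tsv").read())
```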

Obtaining the results

```
pip install pytorch_pretrained_bert
```

```
python > results/lgd_results_large.txt
python base > results/lgd_results_base.txt
python marvin > results/marvin_results_large.txt
python marvin base > results/marvin_results_base.txt
python gul > results/gulordava_results_large.txt
python gul base > results/gulordava_results_base.txt
```
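Each results file can then be summarized into an accuracy figure. The repo does not show the line format of these files, so the sketch below assumes one outcome token ("True"/"False") at the start of each line, which is an assumption for illustration only:

```python
def accuracy(lines):
    """Fraction of stimuli the model got right, assuming each non-empty
    line starts with 'True' or 'False' (an assumed format)."""
    outcomes = [ln.split()[0] == "True" for ln in lines if ln.strip()]
    return sum(outcomes) / len(outcomes)

# Toy input: 2 of 3 stimuli judged correctly.
sample = ["True the dogs ...", "False the dog ...", "True the keys ..."]
print(accuracy(sample))
```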

Generating tables (for the PDF)

