
Tree Transformer

This is the official implementation of the paper Tree Transformer: Integrating Tree Structures into Self-Attention. If you use this code or our results in your research, we would appreciate it if you cite our paper as follows:

  @article{wang2019tree,
    title={Tree Transformer: Integrating Tree Structures into Self-Attention},
    author={Yau-Shian Wang and Hung-Yi Lee and Yun-Nung Chen},
    journal={arXiv preprint arXiv:1909.06639},
    year={2019}
  }


Requirements

  • python3
  • PyTorch 1.0

We use the BERT tokenizer from PyTorch-Transformers to tokenize words. Please install PyTorch-Transformers following the instructions in that repository.
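The BERT tokenizer splits each word into subword units by greedy longest-match against a fixed vocabulary (WordPiece). As a rough illustration of what it does, here is a minimal sketch of that matching scheme; the toy vocabulary and the function name `wordpiece` are assumptions for this example, not part of the repository, and the real tokenizer loads a vocabulary of roughly 30k entries.

```python
# Minimal sketch of WordPiece-style greedy longest-match tokenization
# (the scheme BERT's tokenizer uses). Toy vocabulary; for illustration only.
def wordpiece(word, vocab):
    tokens, start = [], 0
    while start < len(word):
        end, piece = len(word), None
        while start < end:
            sub = word[start:end]
            if start > 0:
                sub = "##" + sub  # continuation pieces get a "##" prefix
            if sub in vocab:
                piece = sub       # longest matching piece from this position
                break
            end -= 1
        if piece is None:
            return ["[UNK]"]      # no piece matches: whole word is unknown
        tokens.append(piece)
        start = end
    return tokens

vocab = {"trans", "##form", "##er", "tree"}
print(wordpiece("transformer", vocab))  # ['trans', '##form', '##er']
```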


For grammar induction training:

python3 main.py -train -model_dir [model_dir] -num_step 60000

The default setting achieves an F1 of approximately 49.5 on the WSJ test set. The training file 'data/train.txt' includes all WSJ data except WSJ22 and WSJ23.
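The F1 quoted above is unlabeled bracketing F1: precision and recall are computed over the (start, end) constituent spans shared between the predicted and gold trees. A small sketch of that computation, assuming brackets are given as span tuples (exact conventions, such as whether trivial spans are dropped, vary between evaluation scripts):

```python
# Sketch of unlabeled bracketing F1 over (start, end) spans.
# Spans are compared as multisets between predicted and gold trees.
from collections import Counter

def bracket_f1(pred, gold):
    pred_c, gold_c = Counter(pred), Counter(gold)
    overlap = sum((pred_c & gold_c).values())  # spans present in both
    if overlap == 0:
        return 0.0
    precision = overlap / sum(pred_c.values())
    recall = overlap / sum(gold_c.values())
    return 2 * precision * recall / (precision + recall)

pred = [(0, 5), (0, 2), (2, 5)]
gold = [(0, 5), (0, 3), (3, 5)]
print(round(bracket_f1(pred, gold), 3))  # 0.333: one matching span of three on each side
```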


For grammar induction testing:

python3 main.py -test -model_dir [model_dir]

The code creates a result directory named [model_dir]. The result directory includes 'bracket.json' and 'tree.txt'. 'bracket.json' contains the brackets of the trees output by the model, which can be used for evaluating F1. The ground-truth brackets of the test data can be obtained using the code of ON-LSTM. 'tree.txt' contains the parse trees. The default test file 'data/test.txt' contains the sentences of WSJ23.
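The brackets in 'bracket.json' are the spans induced by the internal nodes of each parse tree. As an illustrative sketch (not the repository's actual code) of how such spans relate to a tree, here a tree is represented as a nested list of leaf tokens and each internal node contributes the (start, end) span of the leaves it covers:

```python
# Illustrative sketch: derive bracket spans from a parse tree given as a
# nested list of leaves. Each internal node yields a (start, end) span.
def tree_to_brackets(tree, start=0):
    if not isinstance(tree, list):        # leaf: covers exactly one token
        return start + 1, []
    pos, brackets = start, []
    for child in tree:
        pos, sub = tree_to_brackets(child, pos)
        brackets.extend(sub)
    brackets.append((start, pos))         # span covered by this node
    return pos, brackets

# "((the cat) (sat down))" as a nested list
_, spans = tree_to_brackets([["the", "cat"], ["sat", "down"]])
print(sorted(spans))  # [(0, 2), (0, 4), (2, 4)]
```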


