This repo contains the streaming Transformer from our work
*On the Comparison of Popular End-to-End Models for Large Scale Speech Recognition*, which is based on ESPnet 0.6.0. The streaming Transformer consists of a streaming encoder, either chunk-based or look-ahead-based, and a trigger-attention-based decoder.
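The two encoder variants differ mainly in how self-attention is masked. Below is a minimal NumPy sketch of the two masking schemes — this is an illustration of the idea, not the repo's implementation, and `chunk_size`, `left`, and `right` are illustrative parameters:

```python
import numpy as np

def chunk_mask(n_frames, chunk_size):
    """Chunk-based masking: each frame attends to all frames up to
    the end of its own chunk (full past context, no future beyond it)."""
    idx = np.arange(n_frames)
    chunk_end = (idx // chunk_size + 1) * chunk_size  # exclusive end of each frame's chunk
    return idx[None, :] < chunk_end[:, None]          # (query, key) boolean mask

def lookahead_mask(n_frames, left, right):
    """Look-ahead masking: each frame attends to a sliding window
    [i - left, i + right] around its own position."""
    idx = np.arange(n_frames)
    rel = idx[None, :] - idx[:, None]                 # key position minus query position
    return (rel >= -left) & (rel <= right)
```

The latency difference in the table below follows from these masks: a chunk of 32 frames delays emission until the chunk is complete, while look-ahead delays by `right` frames per layer, which accumulates across layers.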
We will release the following models and show reproducible results on LibriSpeech:
- Streaming Transformer (chunk32) with ESPnet Conv2d encoder: https://drive.google.com/file/d/1LSBYvK50Jxvw_GeiYrPwRtJ0DsKU6zL/view?usp=sharing
- Streaming Transformer (chunk32) with VGG encoder: https://drive.google.com/file/d/12P6TsxtOCxrHezqgtk0USjSKBsYHIe7K/view?usp=sharing
- Streaming Transformer (look-ahead) with ESPnet Conv2d encoder: https://drive.google.com/file/d/1YJQaofzsk9KsL2W9Zb42sGLRRIKRs9X/view?usp=sharing
- Streaming Transformer (look-ahead) with VGG encoder: https://drive.google.com/file/d/1LO0pPxU5XJffqJMgtx4W4IL-Aih5m0M/view?usp=sharing
| Model | test-clean | test-other | latency | size |
| -------- | -----: | :----: | :----: | :----: |
| streamingtransformer-chunk32-conv2d | 2.8 | 7.5 | 640 ms | 78M |
| streamingtransformer-chunk32-vgg | 2.8 | 7.0 | 640 ms | 78M |
| streamingtransformer-lookahead2-conv2d | 3.0 | 8.6 | 1230 ms | 78M |
| streamingtransformer-lookahead2-vgg | 2.8 | 7.5 | 1230 ms | 78M |
Our installation follows the installation process of ESPnet:
```shell
export PATH=$CUDAROOT/bin:$PATH
export LD_LIBRARY_PATH=$CUDAROOT/lib64:$LD_LIBRARY_PATH
export CFLAGS="-I$CUDAROOT/include $CFLAGS"
export CUDA_HOME=$CUDAROOT
export CUDA_PATH=$CUDAROOT
```
```shell
cd tools
make -j 10
```
```shell
cd egs/librispeech/asr1
./run.sh
```
By default, the processed data will be stored in the current directory. You can change the path by editing the scripts.
To train a trigger-attention (TA) based streaming Transformer, alignments between CTC paths and transcriptions are required. In our work, we obtain them by Viterbi decoding with the offline Transformer model:
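For background, CTC Viterbi (forced) alignment can be sketched as follows. This is a generic reference implementation of the algorithm, not the code behind `viterbi_decode.sh`:

```python
import math

def ctc_viterbi_align(log_probs, labels, blank=0):
    """Best frame-level CTC alignment for a known transcript.
    log_probs: T x V list of per-frame log-probabilities; labels: token ids."""
    ext = [blank]
    for l in labels:
        ext += [l, blank]               # interleave blanks: [b, l1, b, l2, b, ...]
    T, S = len(log_probs), len(ext)
    NEG = -math.inf
    alpha = [[NEG] * S for _ in range(T)]
    back = [[0] * S for _ in range(T)]
    alpha[0][0] = log_probs[0][ext[0]]  # path may start on blank or first label
    alpha[0][1] = log_probs[0][ext[1]]
    for t in range(1, T):
        for s in range(S):
            cands = [(alpha[t - 1][s], s)]              # stay
            if s >= 1:
                cands.append((alpha[t - 1][s - 1], s - 1))  # advance one state
            if s >= 2 and ext[s] != blank and ext[s] != ext[s - 2]:
                cands.append((alpha[t - 1][s - 2], s - 2))  # skip a blank
            best, prev = max(cands)
            alpha[t][s] = best + log_probs[t][ext[s]]
            back[t][s] = prev
    # end on the final blank or final label, whichever scores higher
    s = max((S - 1, S - 2), key=lambda x: alpha[T - 1][x])
    path = [s]
    for t in range(T - 1, 0, -1):
        s = back[t][s]
        path.append(s)
    path.reverse()
    return [ext[s] for s in path]       # per-frame token (or blank) sequence
```

The resulting per-frame sequence marks where each token fires on the CTC path, which is the alignment information the trigger-attention decoder consumes.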
```shell
cd egs/librispeech/asr1
./viterbi_decode.sh /path/to/model
```
Here, we train a chunk-based streaming Transformer initialized with an offline Transformer provided by ESPnet. Set the pretrained-model path in `conf/train_streaming_transformer.yaml` to the path of your offline model.
```shell
cd egs/librispeech/asr1
./train.sh
```
If you want to train a look-ahead based streaming Transformer, set `chunk` to False and change the `left-window`, `right-window`, `dec-left-window`, and `dec-right-window` arguments. The training log is written to `exp/streaming_transformer/train.log`; you can monitor it with:
```shell
tail -f exp/streaming_transformer/train.log
```
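For illustration, a look-ahead configuration might look like the following. The key names and values here are hypothetical — check `conf/train_streaming_transformer.yaml` for the exact fields:

```yaml
# Hypothetical excerpt -- verify key names against the repo's config.
chunk: false            # disable chunk-based masking
left-window: 10         # encoder left context, in frames
right-window: 2         # encoder look-ahead, in frames
dec-left-window: 10     # decoder left context
dec-right-window: 2     # decoder look-ahead
```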
Execute the following script to decode the test-clean and test-other sets:
```shell
./decode.sh num_of_gpu job_per_gpu
```

For example, `./decode.sh 4 2` decodes with 4 GPUs and 2 parallel jobs per GPU.
For the offline Transformer model, please visit here.