A PyTorch implementation of Listen, Attend and Spell (LAS) [1], an end-to-end automatic speech recognition framework that maps acoustic features directly to a character sequence with a single neural network.
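For orientation, here is a minimal, illustrative sketch of the LAS idea from [1]: a pyramidal BiLSTM Listener encodes the acoustic features, and an attention-equipped Speller emits one character per step. The class names, dimensions, and the dot-product attention below are assumptions for illustration, not this repo's actual modules (see `train.py` for those).

```python
import torch
import torch.nn as nn

class Listener(nn.Module):
    """Pyramidal BiLSTM encoder: each layer halves the time resolution."""
    def __init__(self, input_dim, hidden_dim, num_layers=3):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            in_dim = input_dim if i == 0 else hidden_dim * 4  # 2 (bi) x 2 (pair)
            self.layers.append(
                nn.LSTM(in_dim, hidden_dim, batch_first=True, bidirectional=True))

    def forward(self, x):                          # x: (batch, time, feat)
        for lstm in self.layers:
            x, _ = lstm(x)
            b, t, d = x.size()
            t = t - (t % 2)                        # drop a trailing odd frame
            x = x[:, :t, :].reshape(b, t // 2, d * 2)  # concat frame pairs
        return x                                   # (batch, time/2^layers, hidden*4)

class Speller(nn.Module):
    """Attention-based LSTM decoder emitting one character per step."""
    def __init__(self, vocab_size, enc_dim, hidden_dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.cell = nn.LSTMCell(hidden_dim + enc_dim, hidden_dim)
        self.query = nn.Linear(hidden_dim, enc_dim)
        self.out = nn.Linear(hidden_dim + enc_dim, vocab_size)

    def forward(self, enc, chars):                 # enc: (B,T,E), chars: (B,L)
        B, L = chars.size()
        h = enc.new_zeros(B, self.cell.hidden_size)
        c = torch.zeros_like(h)
        ctx = enc.new_zeros(B, enc.size(2))
        logits = []
        for t in range(L):                         # teacher forcing on gold chars
            h, c = self.cell(torch.cat([self.embed(chars[:, t]), ctx], -1), (h, c))
            # Dot-product attention over encoder states.
            scores = torch.bmm(enc, self.query(h).unsqueeze(2)).squeeze(2)
            ctx = torch.bmm(torch.softmax(scores, -1).unsqueeze(1), enc).squeeze(1)
            logits.append(self.out(torch.cat([h, ctx], -1)))
        return torch.stack(logits, 1)              # (B, L, vocab)

enc = Listener(80, 256)(torch.randn(2, 64, 80))    # -> (2, 8, 1024)
print(Speller(30, enc.size(2), 512)(enc, torch.randint(0, 30, (2, 10))).shape)
```

At inference time the paper decodes with beam search rather than teacher forcing; this sketch only shows the training-time forward pass.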
## Install

```bash
pip install -r requirements.txt
cd tools; make KALDI=/path/to/kaldi
```
## Usage

`egs/aishell/run.sh` downloads the AISHELL dataset for free.

1. `$ cd egs/aishell` and change the aishell data path in `run.sh` to your own path.
2. `$ bash run.sh`, that's all!
You can change a hyper-parameter with `$ bash run.sh --parameter_name parameter_value`, e.g., `$ bash run.sh --stage 3`. See the parameter names in `egs/aishell/run.sh` before the line `. utils/parse_options.sh`.
To run training or decoding by hand:

```bash
$ cd egs/aishell/
$ . ./path.sh
```

Train:

```bash
$ train.py -h
```

Decode:

```bash
$ recognize.py -h
```
Workflow of `egs/aishell/run.sh`:

- Stage 0: Data Preparation
- Stage 1: Feature Generation
- Stage 2: Dictionary and Json Data Preparation
- Stage 3: Network Training
- Stage 4: Decoding
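For intuition on Stage 3, below is a minimal sketch of the kind of objective an attention-based seq2seq ASR model is trained with: per-step cross-entropy against the gold characters, with padded positions masked out. The shapes and the `IGNORE_ID` value are assumptions for illustration, not this repo's actual training code.

```python
import torch
import torch.nn.functional as F

B, L, V = 4, 20, 1000                     # hypothetical batch / length / vocab sizes
IGNORE_ID = -1                            # hypothetical padding id for targets
logits = torch.randn(B, L, V, requires_grad=True)  # decoder output, (B, L, V)
targets = torch.randint(0, V, (B, L))              # gold character ids
targets[2, 15:] = IGNORE_ID               # simulate a shorter, padded utterance

# Flatten to (B*L, V) vs (B*L,); padded positions are skipped via ignore_index.
loss = F.cross_entropy(logits.view(-1, V), targets.view(-1),
                       ignore_index=IGNORE_ID)
loss.backward()
print(loss.item())
```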
If you want to visualize your loss, you can use `visdom`:

- Open a new terminal on your remote server (tmux is recommended) and run `$ visdom`.
- Open a new terminal and run `$ bash run.sh --visdom 1 --visdom_id "<any-string>"` or `$ train.py ... --visdom 1 --visdom_id "<any-string>"`.
- Open your browser and go to `<your-server-ip>:8097`, e.g., `127.0.0.1:8097`.
- On the visdom page, choose `<any-string>` in `Environment` to see your loss.
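For reference, here is a minimal sketch of how a loss curve can be pushed to visdom from Python. The env name `"las-demo"` stands in for whatever `visdom_id` you pass, the loss values are dummies, and 8097 is visdom's default port.

```python
import numpy as np
from visdom import Visdom

vis = Visdom(port=8097, env="las-demo")   # "las-demo" is a hypothetical visdom_id

for epoch, loss in enumerate([2.3, 1.7, 1.2, 0.9]):
    vis.line(X=np.array([epoch]), Y=np.array([loss]),
             win="train_loss",            # reuse one window so points accumulate
             update="append" if epoch > 0 else None,
             opts=dict(title="training loss", xlabel="epoch", ylabel="loss"))
```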
## Results

| Model | CER | Config |
| :---: | :-: | :----: |
| LSTMP | 9.85 | 4x(1024-512) |
| Listen, Attend and Spell | 13.2 | See `egs/aishell/run.sh` |
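CER is the character error rate: the Levenshtein (edit) distance between the hypothesis and the reference transcript, divided by the reference length, in percent. A minimal self-contained sketch of the computation (not this repo's scoring code):

```python
def cer(ref: str, hyp: str) -> float:
    """Character error rate = edit distance / reference length."""
    m, n = len(ref), len(hyp)
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[m][n] / max(m, 1)

print(cer("你好世界", "你好视界"))  # 1 substitution / 4 chars = 0.25
```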
## Reference

[1] W. Chan, N. Jaitly, Q. Le, and O. Vinyals, "Listen, attend and spell: A neural network for large vocabulary conversational speech recognition," in ICASSP, 2016. https://arxiv.org/abs/1508.01211v2