RNN for Spoken Language Understanding
Note: I don't provide personal support for custom changes to the code, only for the release.
Based on the Interspeech '13 paper:

Grégoire Mesnil, Xiaodong He, Li Deng and Yoshua Bengio - Investigation of Recurrent-Neural-Network Architectures and Learning Methods for Spoken Language Understanding
We also have a follow-up IEEE paper:
Grégoire Mesnil, Yann Dauphin, Kaisheng Yao, Yoshua Bengio, Li Deng, Dilek Hakkani-Tur, Xiaodong He, Larry Heck, Gokhan Tur, Dong Yu and Geoffrey Zweig - Using Recurrent Neural Networks for Slot Filling in Spoken Language Understanding
This code obtains state-of-the-art results, a significant improvement (+1% F1-score) over the results presented in the paper.
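As a reminder of what that metric measures, F1 is the harmonic mean of precision and recall. A small illustrative helper (not part of this repository) makes the relationship concrete:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall (0.0 when both are zero)."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Example: precision 0.94, recall 0.95 gives an F1 of roughly 0.945.
print(f1_score(0.94, 0.95))  # ≈ 0.9450
```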
In order to reproduce the results, make sure Theano is installed and the repository is in your PYTHONPATH, e.g. run the command

```shell
export PYTHONPATH=/path/where/is13/is:$PYTHONPATH
```

Then, run the following commands:

```shell
git clone git@github.com:mesnilgr/is13.git
python is13/examples/elman-forward.py
```
For running the Jordan architecture:
```python
import cPickle
train, test, dicts = cPickle.load(open("atis.pkl"))
```
`dicts` is a Python dictionary that contains the mappings from the labels, the named entities (if present) and the words to the indexes used in the `train` and `test` lists. Refer to this tutorial for more details.
Running the following command can give you an idea of how the data has been preprocessed:
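One preprocessing step commonly used with this kind of RNN slot filler is to feed the network a context window of word indexes around each position. A sketch of that idea (a hypothetical helper, padding out-of-sentence positions with -1; not necessarily the exact code in this repository):

```python
def context_window(sentence, size):
    """For each word index in `sentence`, return a window of `size`
    surrounding indexes, centred on the word. Positions falling outside
    the sentence are padded with -1. `size` must be odd."""
    assert size % 2 == 1, "window size must be odd"
    pad = size // 2
    padded = [-1] * pad + list(sentence) + [-1] * pad
    return [padded[i:i + size] for i in range(len(sentence))]

print(context_window([0, 1, 2, 3], 3))
# [[-1, 0, 1], [0, 1, 2], [1, 2, 3], [2, 3, -1]]
```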
To download the intent labels, you may be interested in this notebook.
Recurrent Neural Network Architectures for Spoken Language Understanding by Grégoire Mesnil is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
Based on a work at https://github.com/mesnilgr/is13.