Implementation of "Teaching Machines to Read and Comprehend" proposed by Google DeepMind
This repository contains an implementation of the two models (the Deep LSTM and the Attentive Reader) described in *Teaching Machines to Read and Comprehend* by Karl Moritz Hermann et al., NIPS 2015. It also contains an implementation of a Deep Bidirectional LSTM reader.
The three models implemented in this repository are:
- `deepmind_deep_lstm` reproduces the experimental settings of the DeepMind paper for the LSTM reader
- `deepmind_attentive_reader` reproduces the experimental settings of the DeepMind paper for the Attentive Reader
- `deep_bidir_lstm_2x128` implements a two-layer bidirectional LSTM reader
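As a rough illustration (not the repository's Theano code), the Attentive Reader's core attention step from the paper can be sketched in NumPy. The scoring form `m(t) = tanh(W_ym y_d(t) + W_um u)` follows the paper; the toy dimensions and random weight values below are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

seq_len, doc_dim, query_dim, att_dim = 5, 8, 8, 6

# Encodings of the document tokens and of the query (placeholder values)
y_d = rng.standard_normal((seq_len, doc_dim))  # one row per document token
u = rng.standard_normal(query_dim)             # query representation

# Attention parameters (randomly initialized placeholders)
W_ym = rng.standard_normal((att_dim, doc_dim))
W_um = rng.standard_normal((att_dim, query_dim))
w_ms = rng.standard_normal(att_dim)

# m(t) = tanh(W_ym y_d(t) + W_um u)
m = np.tanh(y_d @ W_ym.T + u @ W_um.T)         # shape: (seq_len, att_dim)

# s(t) ∝ exp(w_ms^T m(t)) -- softmax over document tokens
scores = m @ w_ms
s = np.exp(scores - scores.max())
s /= s.sum()                                   # the attention weights

# r = sum_t s(t) y_d(t): attention-weighted document representation
r = s @ y_d                                    # shape: (doc_dim,)
```

The weights `s` are what the attention visualization below shows: one non-negative number per document token, summing to one.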
We trained the three models for 2 to 4 days each on a Titan Black GPU and obtained the following accuracies (in %):

| Model | Validation (paper) | Test (paper) | Validation (ours) | Test (ours) |
|---|---|---|---|---|
| Deep Bidir LSTM | - | - | 59.76 | 61.62 |
| Deep LSTM Reader | 55.0 | 57.0 | 46 | 47 |
Here is an example of the attention weights computed by the Attentive Reader model:
We recommend using Anaconda 2 and installing the dependencies with the following commands (where `pip` refers to the `pip` command from Anaconda):
```
pip install git+git://github.com/Theano/Theano.git
pip install git+git://github.com/mila-udem/fuel.git
pip install git+git://github.com/mila-udem/blocks.git -r https://raw.githubusercontent.com/mila-udem/blocks/master/requirements.txt
```
Anaconda also includes a Bokeh server, but you still need to install `blocks-extras` if you want to have the plot:
```
pip install git+git://github.com/mila-udem/blocks-extras.git
```
Set the environment variable `DATAPATH` to the folder containing the DeepMind QA dataset. The training questions are expected to be in
```
cp deepmind-qa/* $DATAPATH/deepmind-qa/
```
This will copy our vocabulary list `vocab.txt`, which contains a subset of all the words appearing in the dataset.
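Put together, the setup might look as follows. The path `/tmp/deepmind-data` is only a placeholder (use your own location), and the first line merely fakes the repository's `deepmind-qa` folder so the snippet runs standalone:

```shell
# Stand-in for the repository's deepmind-qa folder (illustration only)
mkdir -p deepmind-qa && touch deepmind-qa/vocab.txt

# Placeholder dataset location; point this wherever you keep the data
export DATAPATH=/tmp/deepmind-data
mkdir -p "$DATAPATH/deepmind-qa"

# Copy the vocabulary list into the dataset folder
cp deepmind-qa/* "$DATAPATH/deepmind-qa/"
ls "$DATAPATH/deepmind-qa"
```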
To train a model (see the list of models at the beginning of this file), run:
Be careful to set your `THEANO_FLAGS` correctly! For instance, you might want to use `THEANO_FLAGS=device=gpu0` if you have a GPU (highly recommended!).
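For example, you could export the flags once per shell session before launching training (adding `floatX=float32` is a common companion Theano setting, not part of the original instructions):

```shell
# Select the first GPU and 32-bit floats for Theano
export THEANO_FLAGS=device=gpu0,floatX=float32
echo "$THEANO_FLAGS"
```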
Teaching Machines to Read and Comprehend, by Karl Moritz Hermann, Tomáš Kočiský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman and Phil Blunsom, Neural Information Processing Systems, 2015.
We would like to thank the developers of Theano, Blocks and Fuel at MILA for their excellent work.
We thank Simon Lacoste-Julien from the SIERRA team at INRIA for providing us access to two Titan Black GPUs.