DeepMind : Teaching Machines to Read and Comprehend

This repository contains an implementation of the two models (the Deep LSTM and the Attentive Reader) described in Teaching Machines to Read and Comprehend by Karl Moritz Hermann et al., NIPS, 2015. It also contains an implementation of a Deep Bidirectional LSTM.

The three models implemented in this repository are:

  • deepmind_deep_lstm
    reproduces the experimental settings of the DeepMind paper for the LSTM reader
  • deepmind_attentive_reader
    reproduces the experimental settings of the DeepMind paper for the Attentive reader
  • deep_bidir_lstm_2x128
    implements a two-layer bidirectional LSTM reader
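
As a rough illustration of what the Attentive Reader computes, here is a minimal NumPy sketch of the paper's attention step: m_t = tanh(W_ym y_t + W_um u), a softmax over the scores w_ms^T m_t, and an attention-weighted read r of the document. All variable names and shapes below are illustrative assumptions, not the repository's Theano/Blocks code:

```python
import numpy as np

def attentive_read(Y, u, W_ym, W_um, w_ms):
    """One attentive-reader step.

    Y    : (T, d_y) document token representations y_1..y_T
    u    : (d_u,)   query representation
    W_ym : (d, d_y), W_um : (d, d_u), w_ms : (d,)  attention parameters
    Returns the weighted document read r and the attention weights s.
    """
    # m_t = tanh(W_ym y_t + W_um u), computed for all t at once
    M = np.tanh(Y @ W_ym.T + u @ W_um.T)   # (T, d)
    scores = M @ w_ms                      # (T,) unnormalized scores
    s = np.exp(scores - scores.max())      # stable softmax
    s /= s.sum()                           # attention weights, sum to 1
    r = s @ Y                              # (d_y,) weighted document read
    return r, s
```

The vector s is the per-token attention weight distribution of the kind visualized in the example below.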

Our results

We trained the three models for 2 to 4 days each on a Titan Black GPU and obtained the following results:

                     DeepMind          Us
                     Valid   Test      Valid   Test
Attentive Reader     61.6    63.0      59.37   61.07
Deep Bidir LSTM      -       -         59.76   61.62
Deep LSTM Reader     55.0    57.0      46      47

Here is an example of the attention weights used by the Attentive Reader model:


Software dependencies:

  • Theano GPU computing library
  • Blocks deep learning framework
  • Fuel data pipeline for Blocks

Optional dependencies:

  • Blocks Extras and a Bokeh server for the plot

We recommend using Anaconda 2 and installing the dependencies with the following commands (where pip refers to the pip command from Anaconda):

pip install git+git://
pip install git+git://
pip install git+git:// -r

Anaconda also includes a Bokeh server, but you still need to install Blocks Extras if you want to have the plot:
pip install git+git://
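
Once installed, a quick sanity check can confirm the dependencies are importable. This is a small helper of our own, not part of the repository:

```python
import importlib.util

def check_deps(names=("theano", "blocks", "fuel")):
    """Report which of the required packages are importable
    in the current environment (True = installed)."""
    return {name: importlib.util.find_spec(name) is not None
            for name in names}

if __name__ == "__main__":
    for name, ok in check_deps().items():
        print(f"{name}: {'found' if ok else 'MISSING'}")
```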

The corresponding dataset is provided by DeepMind, but if the download script does not work (or you are tired of waiting) you can use this preprocessed version of the dataset by Kyunghyun Cho.


Set the environment variable DATAPATH to the folder containing the DeepMind QA dataset. The training questions are expected to be under $DATAPATH/deepmind-qa/. Then run:

cp deepmind-qa/* $DATAPATH/deepmind-qa/

This will copy our vocabulary list, which contains a subset of all the words appearing in the dataset.
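
The data setup can be sketched as follows (the DATAPATH location is a placeholder; point it at wherever you unpacked the dataset):

```shell
# Placeholder path -- adjust to your own dataset location.
export DATAPATH="$HOME/data"
mkdir -p "$DATAPATH/deepmind-qa"
# Run from the repository root to copy the vocabulary files:
# cp deepmind-qa/* "$DATAPATH/deepmind-qa/"
```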

To train a model (see the list of models at the beginning of this file), run the training script with the model name as argument:

./ model_name

Be careful to set your Theano flags correctly! For instance, you will want to select the GPU device if you have one (highly recommended!).
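
A typical invocation might look like the following. THEANO_FLAGS and its device/floatX settings are standard Theano configuration; the script name is a placeholder, since the repository's launcher is not named here:

```shell
# Standard Theano flags: run on the GPU with single-precision floats.
export THEANO_FLAGS="device=gpu,floatX=float32"
# ./train.py deepmind_attentive_reader   # placeholder script name
```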


Reference

Teaching Machines to Read and Comprehend, by Karl Moritz Hermann, Tomáš Kočiský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman and Phil Blunsom, Neural Information Processing Systems (NIPS), 2015.


Credits

Thomas Mesnard

Alex Auvolat

Étienne Simon


Acknowledgments

We would like to thank the developers of Theano, Blocks and Fuel at MILA for their excellent work.

We thank Simon Lacoste-Julien from the SIERRA team at INRIA for providing us access to two Titan Black GPUs.
