
Solution to Kaggle's Quora Duplicate Question Detection Competition

The competition can be found via the link: I was ranked 23rd (top 1%) among 3307 teams with this solution. This is a relatively lightweight model compared to the other top solutions.


  • Download the pre-trained word vectors, namely glove.840B.300d, from and put them into the project directory.
  • Download the train and test data from . Create a folder named "data" and put them in it.
  • Install all the packages in requirements.txt.


  • This code is written in Python 3.5 and tested on a machine with an Intel i5-6300HQ processor and an Nvidia GeForce GTX 950M. Keras is used with the TensorFlow backend and GPU support.
  • First run the NLP-feature and non-NLP-feature extraction scripts. They may take an hour to finish.
  • Then run , which may take around 5 hours to make 10 different predictions on the test set.
  • Finally, ensemble and postprocess the predictions by

Model Explanation

  • Questions are preprocessed so that different ways of writing the same thing are unified. This way, the LSTM does not learn different representations for these variants.
  • Words that occur more than 100 times in the train set are collected. The rest are considered rare words and replaced by the word "memento", which is my favorite movie from C. Nolan. Since "memento" is irrelevant to almost anything, it is basically a placeholder. How many of the rare words are common to both questions in a pair, and how many of them are numeric, are used as features. This whole process leads to better generalization in the LSTM, so it cannot overfit particular pairs by just memorizing their rare words.
  • The features mentioned above are merged with the NLP and non-NLP features. As a result, 4+15+6=25 features are prepared for the network.
  • The train data is divided into 10 folds. In every run, one fold is kept as the validation set for early stopping. So, every run uses 1 fold different from the others for training, which can contribute to model variance. Since we are going to ensemble the models, reasonably increasing model variance is something we may want. I also did more 10-fold runs with different model parameters for better ensembling during the competition.
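The rare-word replacement and rare-word features described above can be sketched roughly as follows. The function names and the exact numeric check are mine, not from the repository; only the 100-occurrence threshold and the "memento" placeholder come from the description.

```python
from collections import Counter

PLACEHOLDER = "memento"  # irrelevant token used for all rare words
MIN_COUNT = 100          # words with a train-set count at or below this are "rare"

def build_vocab(train_questions):
    """Collect words occurring more than MIN_COUNT times in the train set."""
    counts = Counter()
    for q in train_questions:
        counts.update(q.split())
    return {w for w, c in counts.items() if c > MIN_COUNT}

def replace_rare(question, vocab):
    """Replace every out-of-vocabulary word with the placeholder."""
    return " ".join(w if w in vocab else PLACEHOLDER for w in question.split())

def rare_word_features(q1, q2, vocab):
    """Pair features: how many rare words the questions share,
    and how many of those shared rare words are numeric."""
    rare1 = {w for w in q1.split() if w not in vocab}
    rare2 = {w for w in q2.split() if w not in vocab}
    common = rare1 & rare2
    numeric = {w for w in common if w.replace(".", "", 1).isdigit()}
    return len(common), len(numeric)
```

Because every rare word maps to the same placeholder before it reaches the LSTM, the network cannot key on pair-specific tokens, while the two count features still preserve the signal that the pair shares unusual words.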

Network Architecture

[Network architecture diagram]


  • All the generated models are average-ensembled.
  • Since the class imbalance is said to be different in the test set, predictions are adjusted according to the test set class ratio.
  • The postprocessing method I explained in is used.
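A common way to do this kind of prior adjustment is to rescale the predicted odds by the ratio of the test prior to the train prior and renormalize. The default ratios below are illustrative placeholders, not the competition's actual values:

```python
def adjust_for_class_ratio(p, train_pos=0.37, test_pos=0.17):
    """Rescale a predicted positive probability p from the train-set
    class prior (train_pos) to an estimated test-set class prior (test_pos).

    The positive mass is scaled by test_pos/train_pos, the negative mass
    by (1-test_pos)/(1-train_pos), and the result is renormalized.
    """
    a = test_pos / train_pos                   # positive-class scaling
    b = (1.0 - test_pos) / (1.0 - train_pos)   # negative-class scaling
    return a * p / (a * p + b * (1.0 - p))
```

With these example priors, positives are rarer in the test set, so the adjustment pulls every prediction below 1 downward; when the two priors are equal the function is the identity.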

What made my model successful? BETTER GENERALIZATION

  • All the features are question-order independent: when you swap the first and the second question, the feature matrix does not change. For example, instead of using question1frequency and question2frequency, I used minfrequency and maxfrequency.
  • Feature values are bounded when necessary. For example, the number of neighbors is capped at 5 for everything above 5, because I did not want to overfit on a particular pair with a specific neighbor count like 76.
  • The features generated by the LSTM are also question-order independent: both questions share the same LSTM layer, and after it the outputs for question1 and question2 are merged with commutative operations, namely the square of the difference and the sum.
  • I think good preprocessing of the questions also leads to better generalization.
  • Replacing the rare words with a placeholder before the LSTM is another thing I did for better generalization.
  • The neural network is not very big and has a reasonable amount of dropout and Gaussian noise.
  • The predictions of the different NNs are ensembled at the end.
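The order-independence properties above can be illustrated with a small sketch. The feature names and the cap value mirror the description; the vectors passed to the merge stand in for the outputs of the shared LSTM:

```python
import numpy as np

NEIGHBOR_CAP = 5  # bound the neighbor-count feature as described above

def pair_features(freq1, freq2, neighbors):
    """Handcrafted features that do not depend on question order."""
    return np.array([
        min(freq1, freq2),             # min frequency instead of q1/q2 frequency
        max(freq1, freq2),             # max frequency
        min(neighbors, NEIGHBOR_CAP),  # capped neighbor count
    ])

def merge_encodings(v1, v2):
    """Commutative merge of two shared-encoder outputs:
    squared difference concatenated with the elementwise sum."""
    return np.concatenate([(v1 - v2) ** 2, v1 + v2])
```

Both functions return identical results when their question arguments are swapped, which is exactly what makes the full feature matrix invariant to question order.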
