ACL 2020: Improving Multi-hop Question Answering over Knowledge Graphs using Knowledge Base Embeddings



This is the code for our ACL 2020 paper Improving Multi-hop Question Answering over Knowledge Graphs using Knowledge Base Embeddings (Slides)

UPDATE: Code for relation matching has been added. Please see the Readme for details on how to use it.


Data and pre-trained models

In order to run the code, first download the data and pre-trained model archives from here, and unzip them in the main directory.

UPDATE: There was an issue with the WebQSP test set containing 43 fewer questions (issue #86). This has been fixed, and the updated test file should be placed in the corresponding data directory.

MetaQA

Change to the directory ./KGQA/LSTM. The following is an example command to run the QA training code:

```
python3 main.py --mode train --relation_dim 200 --hidden_dim 256 \
--gpu 2 --freeze 0 --batch_size 128 --validate_every 5 --hops 2 --lr 0.0005 --entdrop 0.1 --reldrop 0.2 --scoredrop 0.2 \
--decay 1.0 --model ComplEx --patience 5 --ls 0.0 --kg_type half
```


WebQSP

Change to the directory ./KGQA/RoBERTa. The following is an example command to run the QA training code:

```
python3 main.py --mode train --relation_dim 200 --do_batch_norm 1 \
--gpu 2 --freeze 1 --batch_size 16 --validate_every 10 --hops webqsp_half --lr 0.00002 --entdrop 0.0 --reldrop 0.0 --scoredrop 0.0 \
--decay 1.0 --model ComplEx --patience 20 --ls 0.05 --l3_reg 0.001 --nb_epochs 200 --outfile half_fbwq
```
Note: This will run the code in the vanilla setting without relation matching; relation matching must be done separately.

Also, please note that this implementation uses embeddings created through libkge (https://github.com/uma-pi1/kge). This is a very helpful library, and I would suggest training embeddings through it, since it supports sparse embeddings and shared negative sampling, which speed up learning for large KGs like Freebase.
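For context on how the trained embeddings are used: the paper treats the learned question embedding as a relation and scores candidate answers with the ComplEx scoring function Re(⟨e_h, e_q, conj(e_a)⟩). Below is a minimal NumPy sketch of that scoring step; the function name and shapes are illustrative, not the repository's API.

```python
import numpy as np

def complex_score(head, question, answers):
    """ComplEx-style score Re(<e_h, e_q, conj(e_a)>) for each candidate.

    head:     (d,)   complex head-entity embedding
    question: (d,)   complex question embedding (plays the role of a relation)
    answers:  (n, d) complex candidate-answer embeddings
    Returns an (n,) real score vector.
    """
    return np.real(answers.conj() @ (head * question))

rng = np.random.default_rng(0)
d, n = 4, 3
head = rng.normal(size=d) + 1j * rng.normal(size=d)
q = rng.normal(size=d) + 1j * rng.normal(size=d)
cands = rng.normal(size=(n, d)) + 1j * rng.normal(size=(n, d))
scores = complex_score(head, q, cands)
print(scores.shape)  # (3,)
```

At answer time, the candidate with the highest score is returned.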

Dataset creation


MetaQA

KG dataset

There are 2 datasets: MetaQA_full and MetaQA_half. The full dataset contains the original kb.txt as train.txt, with duplicate triples removed. The half dataset contains only 50% of the triples (randomly selected without replacement).
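The 50% sampling step can be sketched as follows (a hypothetical snippet, not a script from this repository; `make_half_kg` is an illustrative name):

```python
import random

def make_half_kg(triples, frac=0.5, seed=0):
    # keep `frac` of the triples, sampled uniformly without replacement
    rng = random.Random(seed)
    return rng.sample(triples, int(len(triples) * frac))

full = [("A", "r1", "B"), ("B", "r2", "C"), ("C", "r1", "D"), ("D", "r3", "A")]
half = make_half_kg(full)
print(len(half))  # 2
```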

There are some lines like 'entity NOOP entity' in the train.txt of the half dataset. When triples were removed, some entities lost all of their triples, so a KG embedding implementation would never see them in train.txt and would not create an embedding vector for them. Including such 'NOOP' triples adds no extra information about these entities from the KG; they are there only so that any embedding implementation can be used directly and will still generate some (random) vector for them.
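The NOOP padding described above can be sketched like this (a hypothetical helper, assuming triples are (head, relation, tail) tuples):

```python
def add_noop_triples(half_triples, all_entities):
    # entities that lost every triple get a placeholder self-loop, so any
    # KG-embedding trainer reading train.txt still allocates a vector for them
    covered = {e for h, _, t in half_triples for e in (h, t)}
    return half_triples + [(e, "NOOP", e) for e in sorted(all_entities - covered)]

half = [("A", "r1", "B")]
padded = add_noop_triples(half, {"A", "B", "C"})
print(padded)  # [('A', 'r1', 'B'), ('C', 'NOOP', 'C')]
```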

QA Dataset

There are 5 files for each dataset (1, 2 and 3 hop):

- qa_train_{n}hop_train.txt
- qa_train_{n}hop_train_half.txt
- qa_train_{n}hop_train_old.txt
- qa_dev_{n}hop.txt
- qa_test_{n}hop.txt

Out of these, qa_dev_{n}hop, qa_test_{n}hop and qa_train_{n}hop_train_old are exactly the same as the original MetaQA dev, test and train files respectively.

For qa_train_{n}hop_train and qa_train_{n}hop_train_half, we have added each KG triple (h, r, t) in the form (head entity, question, answer). This is to prevent the model from 'forgetting' the entity embeddings while it trains on the QA dataset. qa_train_{n}hop_train.txt contains all triples, while qa_train_{n}hop_train_half.txt contains only the triples from MetaQA_half.
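A sketch of that augmentation step, assuming the MetaQA QA format of 'question containing [head entity] TAB answer' (the exact line template and the helper name `kg_triples_to_qa` are assumptions for illustration):

```python
def kg_triples_to_qa(triples):
    # each KG triple becomes a trivial QA pair: the relation name acts as the
    # "question" about the head entity, and the tail entity is the answer
    return [f"what is [{h}] {r}\t{t}" for h, r, t in triples]

lines = kg_triples_to_qa([("Gladiator", "directed_by", "Ridley Scott")])
print(lines[0])
```

These lines are then appended to the regular QA training file.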


WebQuestionsSP

KG dataset

There are 2 datasets: fbwq_full and fbwq_half.

Creating fbwq_full: We restrict the KB to be a subset of Freebase which contains all facts that are within 2-hops of any entity mentioned in the questions of WebQuestionsSP. We further prune it to contain only those relations that are mentioned in the dataset. This smaller KB has 1.8 million entities and 5.7 million triples.
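The subsetting step above can be sketched roughly as follows (an illustrative snippet assuming undirected 2-hop reachability and that a fact is kept when both endpoints are reachable; `two_hop_subset` is not a function from this repository):

```python
from collections import defaultdict

def two_hop_subset(triples, seed_entities, keep_relations):
    # undirected adjacency over the full KG
    adj = defaultdict(set)
    for h, _, t in triples:
        adj[h].add(t)
        adj[t].add(h)
    # collect every entity within 2 hops of a seed (question) entity
    reach = set(seed_entities)
    frontier = set(seed_entities)
    for _ in range(2):
        frontier = {n for e in frontier for n in adj[e]} - reach
        reach |= frontier
    # keep facts between reachable entities, pruned to dataset relations
    return [(h, r, t) for h, r, t in triples
            if h in reach and t in reach and r in keep_relations]

triples = [("q1", "r1", "a"), ("a", "r1", "b"), ("b", "r2", "c"), ("c", "r1", "d")]
subset = two_hop_subset(triples, {"q1"}, {"r1", "r2"})
print(subset)  # [('q1', 'r1', 'a'), ('a', 'r1', 'b')]
```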

Creating fbwq_half: We randomly sample 50% of the edges from fbwq_full.

QA Dataset

Same as the original WebQuestionsSP QA dataset.

How to cite

If you used our work or found it helpful, please use the following citation:

```
@inproceedings{saxena2020improving,
  title={Improving multi-hop question answering over knowledge graphs using knowledge base embeddings},
  author={Saxena, Apoorv and Tripathi, Aditay and Talukdar, Partha},
  booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
  year={2020}
}
```
