# Strong baseline for visual question answering
This is a re-implementation of Vahid Kazemi and Ali Elqursh's paper Show, Ask, Attend, and Answer: A Strong Baseline For Visual Question Answering in PyTorch.
The paper shows that with a relatively simple model, using only common building blocks in Deep Learning, you can get better accuracies than the majority of previously published work on the popular VQA v1 dataset.
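As a rough point of reference, the building blocks in question are a pretrained CNN for image features, an LSTM question encoder, soft spatial attention over the image features, and a small classifier over the combined representation. The sketch below is only an illustration of that general structure under assumed dimensions and layer names; it is not the code in `model.py`, and details (feature sizes, number of glimpses, classifier width) may differ from the repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BaselineVQASketch(nn.Module):
    """Illustrative sketch of the Show-Ask-Attend-Answer structure (not model.py)."""

    def __init__(self, vocab_size, num_answers, embed_dim=300, lstm_dim=1024,
                 feat_dim=2048, glimpses=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, lstm_dim, batch_first=True)
        # Attention: score each spatial position of the image features,
        # conditioned on the question representation; one map per glimpse.
        self.attn_conv = nn.Conv2d(feat_dim + lstm_dim, glimpses, kernel_size=1)
        self.classifier = nn.Sequential(
            nn.Linear(glimpses * feat_dim + lstm_dim, 1024),
            nn.ReLU(inplace=True),
            nn.Linear(1024, num_answers),
        )

    def forward(self, image_features, question_tokens):
        # image_features: [batch, feat_dim, H, W] from a pretrained CNN
        # question_tokens: [batch, seq_len] integer-encoded question
        _, (hidden, _) = self.lstm(self.embedding(question_tokens))
        q = hidden[-1]                                       # [batch, lstm_dim]

        b, c, h, w = image_features.shape
        q_map = q[:, :, None, None].expand(b, q.size(1), h, w)
        attn = self.attn_conv(torch.cat([image_features, q_map], dim=1))
        attn = F.softmax(attn.view(b, -1, h * w), dim=-1)    # [batch, glimpses, H*W]

        feats = image_features.view(b, c, h * w)
        # Weighted sum of image features per glimpse, then flatten.
        glimpse_feats = torch.einsum('bgs,bcs->bgc', attn, feats).reshape(b, -1)

        return self.classifier(torch.cat([glimpse_feats, q], dim=1))
```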
This repository is intended to provide a straightforward implementation of the paper for other researchers to build on. The results closely match the reported results, as the majority of details should be exactly the same as in the paper. (Thanks to the authors for answering my questions about some details!) This implementation seems to consistently converge to results about 0.1% better, due to two main implementation differences.
A fully trained model (convergence shown below) is available for download.
Note that the model in my other VQA repo performs better than the model implemented here.
Clone this repository with:

```
git clone https://github.com/Cyanogenoid/pytorch-vqa --recursive
```
Set the paths to the downloaded questions, answers, and images in `config.py`:

- `qa_path` should contain the VQA question and annotation JSON files.
- `train_path`, `val_path`, and `test_path` should contain the train, validation, and test images respectively.
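As a rough illustration, the path settings in `config.py` might look something like the following. Only `qa_path` and `test_path` are named above; the other variable names and the directory layout here are assumptions for the sake of the example.

```python
# Illustrative sketch of the path settings in config.py; the exact variable
# names and directory layout in the repository may differ.
qa_path = 'data/vqa'            # VQA v1 question and annotation JSON files
train_path = 'data/train2014'   # training images (assumed name)
val_path = 'data/val2014'       # validation images (assumed name)
test_path = 'data/test2015'     # test images
```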
Pre-process the images and build the question and answer vocabularies:

```
python preprocess-images.py
python preprocess-vocab.py
```
Train the model with:

```
python train.py
```

This will alternate between one epoch of training on the train split and one epoch of validation on the validation split, while printing the current training progress to stdout and saving logs in the `logs` directory. The logs contain the name of the model, training statistics, the contents of `config.py`, model weights, evaluation information (per-question answer and accuracy), and the question and answer vocabularies.
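Since a log bundles all of this into a single file, a quick way to inspect a saved run is to load it with `torch.load` and look at what it contains. The file name below is just a placeholder, and the structure of the loaded object is not guaranteed; this is only a sketch of how one might poke at a log.

```python
import torch

# Load one of the saved log files; the file name here is a placeholder,
# not the actual name produced by train.py.
log = torch.load('logs/example.pth', map_location='cpu')

# The log is expected to bundle the items described above (model name,
# training statistics, config, weights, evaluation info, vocabularies).
print(type(log))
if isinstance(log, dict):
    for key in log:
        print(key)
```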