Visual Question Generation

Generating natural questions about images. The goal of this project is to generate the kinds of questions a human would be likely to ask when shown an image or a scene. Most of these questions do not describe the visible objects; instead they infer deeper concepts and events, and can be used to start a conversation in human-machine interaction [1].

Results

1st example

2nd example

3rd example

4th example

5th example

Explanation of Jupyter Notebooks:

Download datasets.ipynb A Jupyter Notebook that downloads the images and questions from the dataset (link here). The dataset is organised by image source (Bing, MSCOCO, or Flickr) and then by split (train, val, and test). This notebook is mainly inspired by this repo here. After fully running this notebook you can proceed to the next one, VQG-PyTorch.ipynb, to train the model and see some results.
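The exact download code lives in the notebook itself; the snippet below is only a minimal sketch that reproduces the source-then-split folder layout described above, assuming one questions file per source/split pair. The base URL and file names are placeholders, not the real dataset locations (use the link referenced in the notebook).

```python
import os
import urllib.request

# Hypothetical layout: one questions file per (source, split) pair, mirroring
# the Bing / MSCOCO / Flickr x train / val / test organisation described above.
BASE_URL = "https://example.com/vqg"      # placeholder, not the real dataset URL
SOURCES = ["bing", "coco", "flickr"]
SPLITS = ["train", "val", "test"]

def download_questions(out_dir="data"):
    """Fetch one questions file per source/split into data/<source>/."""
    for source in SOURCES:
        os.makedirs(os.path.join(out_dir, source), exist_ok=True)
        for split in SPLITS:
            fname = f"{source}_{split}_questions.tsv"   # hypothetical file name
            url = f"{BASE_URL}/{fname}"
            dest = os.path.join(out_dir, source, fname)
            if not os.path.exists(dest):
                urllib.request.urlretrieve(url, dest)

if __name__ == "__main__":
    download_questions()
```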

VQG-PyTorch.ipynb This Jupyter Notebook can run in two modes: train and predict. Feel free to change the to_train variable in the second cell to False if you no longer need to train.

This notebook has three main parts:

  1. Building the train, validation, and test sets by encoding the images and joining them with their questions.
  2. Training the GRNN with several configurable variables.
  3. Predicting and showing the results using beam search (a rough sketch follows this list).
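The notebook contains the actual implementation; the following is a minimal sketch of one common reading of the GRNN described above, assuming pre-extracted CNN image features that seed the initial hidden state of a GRU decoder, plus a small beam search for step 3. All class names, dimensions, and token handling are illustrative assumptions rather than the notebook's code.

```python
import torch
import torch.nn as nn

class GRNNQuestionGenerator(nn.Module):
    """Sketch of a GRU-based question decoder conditioned on image features."""

    def __init__(self, feat_dim=2048, embed_dim=256, hidden_dim=512, vocab_size=10000):
        super().__init__()
        self.img_proj = nn.Linear(feat_dim, hidden_dim)  # image features -> initial hidden state
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, img_feats, questions):
        # img_feats: (B, feat_dim), questions: (B, T) token ids (teacher forcing)
        h0 = torch.tanh(self.img_proj(img_feats)).unsqueeze(0)  # (1, B, hidden)
        emb = self.embed(questions)                             # (B, T, embed)
        out, _ = self.gru(emb, h0)                              # (B, T, hidden)
        return self.out(out)                                    # (B, T, vocab) logits

def beam_search(model, img_feat, bos_id, eos_id, beam_size=3, max_len=20):
    """Very small beam search over the decoder above, for a single image."""
    model.eval()
    with torch.no_grad():
        h = torch.tanh(model.img_proj(img_feat)).view(1, 1, -1)
        beams = [([bos_id], 0.0, h)]                 # (tokens, log-prob, hidden)
        for _ in range(max_len):
            candidates = []
            for tokens, score, hid in beams:
                if tokens[-1] == eos_id:             # finished hypotheses pass through
                    candidates.append((tokens, score, hid))
                    continue
                emb = model.embed(torch.tensor([[tokens[-1]]]))
                out, new_hid = model.gru(emb, hid)
                logp = torch.log_softmax(model.out(out[:, -1]), dim=-1).squeeze(0)
                top_lp, top_ids = logp.topk(beam_size)
                for lp, idx in zip(top_lp.tolist(), top_ids.tolist()):
                    candidates.append((tokens + [idx], score + lp, new_hid))
            beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_size]
        return beams[0][0]                           # best-scoring token sequence
```

For step 2, training would typically minimise a cross-entropy loss between these logits and the ground-truth question tokens shifted by one position.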

References

[1] Nasrin Mostafazadeh, Ishan Misra, Jacob Devlin, Margaret Mitchell, Xiaodong He, and Lucy Vanderwende. Generating Natural Questions About an Image. ACL 2016.
