Exploring Models and Data for Image Question Answering
Authors: Mengye Ren, Ryan Kiros, Richard Zemel
NeurIPS 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Section 4: "Experimental Results" |
| Researcher Affiliation | Academia | Mengye Ren1, Ryan Kiros1, Richard S. Zemel1,2 University of Toronto1 Canadian Institute for Advanced Research2 {mren, rkiros, zemel}@cs.toronto.edu |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | We release the complete details of the models at https://github.com/renmengye/imageqa-public. |
| Open Datasets | Yes | The COCO-QA dataset can be downloaded at http://www.cs.toronto.edu/~mren/imageqa/data/cocoqa |
| Dataset Splits | Yes | Table 1: COCO-QA question type break-down.<br>OBJECT: 54992 train (69.84%), 27206 test (69.85%)<br>NUMBER: 5885 train (7.47%), 2755 test (7.07%)<br>COLOR: 13059 train (16.59%), 6509 test (16.71%)<br>LOCATION: 4800 train (6.10%), 2478 test (6.36%)<br>TOTAL: 78736 train (100.00%), 38948 test (100.00%) |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, processor types) used for running its experiments. |
| Software Dependencies | No | The paper mentions software like the 'Stanford parser', 'WordNet', and the 'NLTK software package', but does not provide specific version numbers for these dependencies. |
| Experiment Setup | No | The paper does not provide specific experimental setup details such as concrete hyperparameter values (e.g., learning rate, batch size, number of epochs) or optimizer settings. |
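The COCO-QA split counts quoted above can be cross-checked with a few lines of arithmetic. The sketch below is an illustrative consistency check, not part of the paper's released code: it recomputes the per-category percentages from the raw counts and confirms the totals.

```python
# Consistency check for the COCO-QA question-type breakdown (Table 1).
# Counts are copied from the table above; percentages are recomputed.
splits = {
    "OBJECT":   (54992, 27206),
    "NUMBER":   (5885, 2755),
    "COLOR":    (13059, 6509),
    "LOCATION": (4800, 2478),
}

train_total = sum(tr for tr, _ in splits.values())
test_total = sum(te for _, te in splits.values())

# Totals should match the TOTAL row of the table.
assert train_total == 78736 and test_total == 38948

for name, (tr, te) in splits.items():
    print(f"{name}: {100 * tr / train_total:.2f}% train, "
          f"{100 * te / test_total:.2f}% test")
```

Running this reproduces the reported percentages (e.g. OBJECT comes out to 69.84% of the training questions and 69.85% of the test questions), so the table is internally consistent.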