ManyModalQA: Modality Disambiguation and QA over Diverse Inputs
Authors: Darryl Hannan, Akshay Jain, Mohit Bansal (pp. 7879-7886)
AAAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We collect our data by scraping Wikipedia and then utilize crowdsourcing to collect question-answer pairs. Our questions are ambiguous, in that the modality that contains the answer is not easily determined based solely upon the question. To demonstrate this ambiguity, we construct a modality selector (or disambiguator) network, and this model gets substantially lower accuracy on our challenge set, compared to existing datasets, indicating that our questions are more ambiguous. By analyzing this model, we investigate which words in the question are indicative of the modality. Next, we construct a simple baseline MANYMODALQA model, which, based on the prediction from the modality selector, fires a corresponding pre-trained state-of-the-art unimodal QA model. We focus on providing the community with a new manymodal evaluation set and only provide a fine-tuning set, with the expectation that existing datasets and approaches will be transferred for most of the training, to encourage low-resource generalization without large, monolithic training sets for each new task. There is a significant gap between our baseline models and human performance; therefore, we hope that this challenge encourages research in end-to-end modality disambiguation and multimodal QA models, as well as transfer learning. (A minimal sketch of this selector-then-dispatch pipeline appears after the table.) |
| Researcher Affiliation | Academia | Darryl Hannan, Akshay Jain, Mohit Bansal University of North Carolina at Chapel Hill {dhannan, akshayj, mbansal}@cs.unc.edu |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper states: 'We will include some data samples in our arXiv supplementary and will release the full challenge set on our public website.' This refers to data release, not explicit source code for the methodology described. |
| Open Datasets | Yes | We collect our challenge data from English Wikipedia, which contains 6 million articles, each containing many modalities, including text, tables, images, video, audio, and more. Furthermore, all of this content is publicly available and easy to access. ... We train the model on a version of SQuAD v1.1 that we modified to better match our data (Rajpurkar et al. 2016). ... Table-based QA Model: To process our tables, we use Stanford's SEMPRE framework, trained on WikiTableQuestions (Pasupat and Liang 2015) and fine-tuned on our data... Image-based QA Model: Our image QA model uses the bottom-up attention architecture (Anderson et al. 2018) and is trained using VQA v2 (Goyal et al. 2017). |
| Dataset Splits | Yes | MANYMODALQA contains 10,190 questions: 2,873 image, 3,789 text, and 3,528 table; a 20/30/50% fine-tuning/dev/test split. ... We make a training set with 1,800 examples, a dev set with 5,850, and a test set with 7,350. (An illustrative split-construction sketch follows the table.) |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions models and frameworks (e.g., BERT, SEMPRE, ELMo, RoBERTa, LXMERT, GloVe) but does not provide specific version numbers for these or any other software dependencies required for replication. |
| Experiment Setup | No | The paper does not provide specific experimental setup details such as concrete hyperparameter values (e.g., learning rate, batch size, number of epochs) or specific optimizer settings. |
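
The baseline summarized in the Research Type row is a selector-then-dispatch pipeline: a modality disambiguator predicts whether the answer lives in the article text, a table, or an image, and the question is then handed to the corresponding pre-trained unimodal QA model. The sketch below illustrates that control flow only. The class and function names (`ModalitySelector`, `text_qa`, `table_qa`, `image_qa`) are placeholders of this write-up, not the authors' unreleased implementation; the real components would be the paper's disambiguator network, a SQuAD-style reader, SEMPRE, and a bottom-up-attention VQA model, as quoted in the table.

```python
# Minimal sketch (not the authors' code) of the ManyModalQA baseline pipeline:
# a modality selector predicts which modality holds the answer, then the
# question is routed to a pre-trained unimodal QA model. All names here are
# hypothetical placeholders.
from typing import Callable, Dict


class ModalitySelector:
    """Placeholder for the modality disambiguator network."""

    LABELS = ("text", "table", "image")

    def predict(self, question: str) -> str:
        # A real selector would encode the question and score the three
        # modalities; here we return a dummy label so the sketch runs.
        return "text"


def text_qa(question: str, context: Dict) -> str:
    # Stand-in for a span-extraction reader trained on SQuAD-style data.
    return "span from article text"


def table_qa(question: str, context: Dict) -> str:
    # Stand-in for a table QA model (the paper uses SEMPRE / WikiTableQuestions).
    return "cell or aggregate from table"


def image_qa(question: str, context: Dict) -> str:
    # Stand-in for a VQA model (the paper uses bottom-up attention / VQA v2).
    return "answer from image"


UNIMODAL_MODELS: Dict[str, Callable[[str, Dict], str]] = {
    "text": text_qa,
    "table": table_qa,
    "image": image_qa,
}


def answer(question: str, context: Dict, selector: ModalitySelector) -> str:
    """Route the question to the unimodal model chosen by the selector."""
    modality = selector.predict(question)
    return UNIMODAL_MODELS[modality](question, context)


if __name__ == "__main__":
    ctx = {"text": "...", "table": [["Year", "Title"]], "image": "article_image.jpg"}
    print(answer("When was the album released?", ctx, ModalitySelector()))
```

Because the unimodal models are used largely off the shelf, the component specific to ManyModalQA in this setup is the selector, which is consistent with the paper's framing of the challenge around modality disambiguation and transfer learning rather than large per-task training sets.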
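
For the 20/30/50% fine-tuning/dev/test proportions quoted in the Dataset Splits row, the snippet below shows one generic way such a split could be materialized over a list of question IDs. It is illustrative only: the helper name, seed, and IDs are assumptions of this sketch, and it does not reproduce the paper's actual split files or counts.

```python
# Illustrative only: one way to build a 20/30/50% fine-tuning/dev/test split
# over a list of question IDs. The proportions come from the paper; the helper
# name, seed, and IDs are assumptions of this sketch.
import random


def split_20_30_50(question_ids, seed=0):
    ids = list(question_ids)
    random.Random(seed).shuffle(ids)   # shuffle a copy, deterministically
    n = len(ids)
    n_ft = int(0.20 * n)               # 20% fine-tuning
    n_dev = int(0.30 * n)              # 30% dev
    return {
        "fine_tune": ids[:n_ft],
        "dev": ids[n_ft:n_ft + n_dev],
        "test": ids[n_ft + n_dev:],    # remaining ~50% test
    }


if __name__ == "__main__":
    splits = split_20_30_50([f"q{i}" for i in range(1000)])
    print({name: len(members) for name, members in splits.items()})
```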