Introspective Distillation for Robust Question Answering
Authors: Yulei Niu, Hanwang Zhang
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on visual QA datasets VQA v2, VQA-CP, and reading comprehension dataset SQuAD demonstrate that our proposed IntroD maintains the competitive OOD performance compared to other debiasing methods, while sacrificing little or even achieving better ID performance compared to the non-debiasing ones. |
| Researcher Affiliation | Academia | Yulei Niu Nanyang Technological University yn.yuleiniu@gmail.com Hanwang Zhang Nanyang Technological University hanwangzhang@ntu.edu.sg |
| Pseudocode | No | The paper describes the approach conceptually and with equations but does not include structured pseudocode or an algorithm block. |
| Open Source Code | Yes | Code are available at https://github.com/yuleiniu/introd. |
| Open Datasets | Yes | We conducted experiments on the benchmark datasets VQA v2 [16] and VQA-CP v2 [2]. ... We conducted experiments on the reading comprehension benchmark dataset SQuAD [27]. |
| Dataset Splits | Yes | For the ID setting, we reported the results on VQA v2 val set. For the VQA-CP dataset, we also followed Teney et al. [31] and held out 8k samples from the training set as the val set for ID evaluation. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU/CPU models) used for running the experiments. |
| Software Dependencies | No | The paper mentions using backbones like UpDn [4], BERT [12], and XLNet [38] but does not provide specific version numbers for these or other software dependencies. |
| Experiment Setup | No | The paper discusses the methods and architectures but defers detailed training specifics to the appendix ("More training details are in the appendix.") and does not provide hyperparameter values or other system-level training settings in the main text. |