Coarse-grain Fine-grain Coattention Network for Multi-evidence Question Answering

Authors: Victor Zhong, Caiming Xiong, Nitish Shirish Keskar, Richard Socher

ICLR 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | On the QAngaroo WikiHop multi-evidence question answering task, the CFC obtains a new state-of-the-art result of 70.6% on the blind test set, outperforming the previous best by 3% accuracy despite not using pretrained contextual encoders.
Researcher Affiliation | Collaboration | 1 Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, WA (vzhong@cs.washington.edu); 2 Salesforce Research, Palo Alto, CA ({cxiong, nkeskar, rsocher}@salesforce.com)
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described.
Open Datasets | Yes | We evaluate the CFC on two tasks to evaluate its effectiveness. The first task is multi-evidence question answering on the unmasked and masked version of the WikiHop dataset (Welbl et al., 2018). The second task is the multi-paragraph extractive question answering task TriviaQA, which we frame as a span reranking task (Joshi et al., 2017).
Dataset Splits | Yes | We evaluate the accuracy of the model on the development set every epoch, and evaluate the model that obtained the best accuracy on the development set on the held-out test set. (A minimal sketch of this selection protocol follows the table.)
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies | No | The paper mentions tools like Stanford CoreNLP, GloVe, and Adam, but does not specify version numbers for these or other key software components used for reproducibility.
Experiment Setup | Yes | For the best-performing model, we train the CFC using Adam (Kingma & Ba, 2015) for a maximum of 50 epochs with a batch size of 80 examples. We use an initial learning rate of 10^-3 with (β1, β2) = (0.9, 0.999) and employ a cosine learning rate decay (Loshchilov & Hutter, 2017) over the maximum budget. We use an embedding size of d_emb = 400, 300 of which are from GloVe vectors (Pennington et al., 2014) and 100 of which are from character n-gram vectors (Hashimoto et al., 2017). The embeddings are fixed and not tuned during training. All GRUs have a hidden size of d_hid = 100. We regularize the model using dropout (Srivastava et al., 2014) at several locations in the model: after the embedding layer with a rate of 0.3, encoders with a rate of 0.3, coattention layers with a rate of 0.2, and self-attention layers with a rate of 0.2. We also apply word dropout with a rate of 0.25 (Zhang et al., 2017; Zhong et al., 2018).
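To make the quoted Experiment Setup concrete, here is a minimal training-configuration sketch, assuming PyTorch. The CFC model itself is not reproduced; a stand-in GRU encoder is used only so the optimizer and scheduler settings run as written, and every hyperparameter value comes from the quote above.

```python
# Minimal sketch of the reported training setup, assuming PyTorch.
# The CFC model is not reproduced; `model` below is a stand-in encoder
# so the optimizer/scheduler configuration is runnable.
import torch.nn as nn
from torch.optim import Adam
from torch.optim.lr_scheduler import CosineAnnealingLR

MAX_EPOCHS = 50   # maximum training budget reported in the paper
BATCH_SIZE = 80   # examples per batch

# Stand-in encoder: fixed 400-d inputs (300-d GloVe + 100-d character n-gram
# embeddings, not tuned during training) into a GRU with hidden size 100.
model = nn.GRU(input_size=400, hidden_size=100, bidirectional=True)

# Adam with lr = 1e-3 and (beta1, beta2) = (0.9, 0.999),
# with cosine learning-rate decay over the maximum budget.
optimizer = Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.999))
scheduler = CosineAnnealingLR(optimizer, T_max=MAX_EPOCHS)

# Dropout rates reported in the paper (applied at the corresponding layers of
# the real model): embeddings 0.3, encoders 0.3, coattention 0.2,
# self-attention 0.2, plus word dropout of 0.25 on input tokens.
```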
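The Dataset Splits row describes a model-selection protocol rather than explicit split sizes. A minimal sketch of that protocol, assuming hypothetical `train_one_epoch` and `evaluate_accuracy` callables supplied by the caller (they are not part of any released code), looks like this:

```python
# Sketch of the reported protocol: evaluate on the development set every epoch
# and keep the checkpoint with the best dev accuracy; that checkpoint is then
# evaluated once on the held-out test set.
# `train_one_epoch` and `evaluate_accuracy` are hypothetical callables
# provided by the caller, not part of the paper's code.
import copy

def select_best_model(model, train_one_epoch, evaluate_accuracy, max_epochs=50):
    best_acc, best_state = float("-inf"), None
    for _ in range(max_epochs):
        train_one_epoch(model)
        dev_acc = evaluate_accuracy(model, split="dev")
        if dev_acc > best_acc:                        # new best dev accuracy
            best_acc = dev_acc
            best_state = copy.deepcopy(model.state_dict())
    model.load_state_dict(best_state)                 # restore best dev checkpoint
    return model, best_acc
```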