RINK: Reader-Inherited Evidence Reranker for Table-and-Text Open Domain Question Answering

Authors: Eunhwan Park, Sung-Min Lee, Daeryong Seo, Seonhoon Kim, Inho Kang, Seung-Hoon Na

AAAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on OTT-QA, a large-scale table-and-text open-domain question answering dataset, show that the proposed RINK armed with our pretraining procedure makes improvements over the baseline reranking method and leads to state-of-the-art performance.
Researcher Affiliation | Collaboration | Eunhwan Park1, Sung-Min Lee1, Daeryong Seo3, Seonhoon Kim2*, Inho Kang3, Seung-Hoon Na1 1 Jeonbuk National University 2 Coupang 3 Naver Corporation
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described in this paper.
Open Datasets | Yes | Experimental results on OTT-QA, a large-scale table-and-text open-domain question answering dataset, show that the proposed RINK armed with our pretraining procedure makes improvements over the baseline reranking method and leads to state-of-the-art performance.
Dataset Splits | Yes | Table 1 shows the detailed statistics of OTT-QA (Chen et al. 2021)... Train Dataset: 41,469; Development Dataset: 2,214; Test Dataset: 2,158
Hardware Specification | Yes | All experiments were conducted using eight NVIDIA Quadro RTX A6000 GPUs.
Software Dependencies | No | The paper mentions 'RoBERTa-base' and 'T5-base' models but does not provide specific version numbers for general software dependencies such as programming languages or libraries (e.g., Python, PyTorch, CUDA versions).
Experiment Setup | Yes | We used a batch size of 16 and learning rates of 5 × 10⁻⁵ and 1 × 10⁻⁴ to train the BERT and T5, respectively. The AdamW optimizer was used for training.
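The reported setup (batch size 16, AdamW, learning rates of 5 × 10⁻⁵ for the BERT-style reranker and 1 × 10⁻⁴ for T5) can be captured as a minimal configuration sketch. The key names below are illustrative assumptions, not taken from the authors' code, which is not released.

```python
# Minimal sketch of the training hyperparameters reported in the paper.
# Key names are hypothetical; the paper provides no source code.
train_config = {
    "batch_size": 16,              # reported batch size
    "optimizer": "AdamW",          # reported optimizer
    "learning_rates": {
        "bert_reranker": 5e-5,     # 5 × 10⁻⁵ for the BERT-style reranker
        "t5_reader": 1e-4,         # 1 × 10⁻⁴ for the T5 reader
    },
}
print(train_config["learning_rates"]["bert_reranker"])  # → 5e-05
```

Such a config fragment is what a reproduction attempt would need to pin down first, since the paper specifies hyperparameters but not software versions.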