End-to-End Training of Multi-Document Reader and Retriever for Open-Domain Question Answering

Authors: Devendra Singh Sachan, Siva Reddy, Will Hamilton, Chris Dyer, Dani Yogatama

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on three benchmark datasets demonstrate that our proposed method outperforms all existing approaches of comparable size by 2-3 absolute exact match points, achieving new state-of-the-art results. We evaluate our proposed method by experimenting on three commonly used Open QA datasets: Natural Questions, TriviaQA, and WebQuestions (§3).
Researcher Affiliation | Collaboration | (1) Mila - Quebec AI Institute; (2) School of Computer Science, McGill University; (3) DeepMind
Pseudocode | Yes | Algorithm 1: End-to-end training of multi-document reader and retriever.
    Input: Model parameters Θ and Φ, evidence documents D.
    while not converged do
        Compute Z_top-K using the current retriever parameters Φ.  // E-step
        Compute p(a | q, z_k) for each z_k using the current reader parameters Θ.  // E-step
        Update model parameters Θ and Φ to maximize the log-likelihood in Eq. 6.  // M-step
    end
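To make the E-step/M-step structure concrete, here is a minimal PyTorch sketch of the marginalized objective that Algorithm 1 maximizes. The function name `emdr2_loss` and the random tensors are illustrative stand-ins; the paper's Eq. 6 additionally applies stop-gradient operators to separate the reader and retriever terms, which this simplification omits.

```python
import torch
import torch.nn.functional as F

def emdr2_loss(retriever_scores: torch.Tensor, reader_logprobs: torch.Tensor) -> torch.Tensor:
    """Simplified marginal likelihood over the top-K retrieved documents.

    retriever_scores: [K] relevance logits for the top-K documents (from Φ).
    reader_logprobs:  [K] log p(a | q, z_k) under the reader (from Θ).
    Returns -log sum_k p(a | q, z_k) p(z_k | q).
    """
    log_prior = F.log_softmax(retriever_scores, dim=0)  # log p(z_k | q)
    return -torch.logsumexp(reader_logprobs + log_prior, dim=0)

# Toy usage with random stand-in scores for K = 50 documents.
K = 50
retriever_scores = torch.randn(K, requires_grad=True)              # stand-in for Φ's outputs
reader_logprobs = F.log_softmax(torch.randn(K, requires_grad=True), dim=0)  # stand-in for Θ's outputs
loss = emdr2_loss(retriever_scores, reader_logprobs)
loss.backward()  # gradients reach both inputs; an optimizer step would be the M-step
```

Because the retriever's document prior sits inside the logsumexp, a single backward pass produces gradients for both the reader and retriever terms, mirroring the joint M-step update in Algorithm 1.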
Open Source Code | Yes | Our code is available at: https://github.com/DevSinghSachan/emdr2
Open Datasets | Yes | We experiment with three commonly used open-domain question answering datasets: Natural Questions (NQ; Kwiatkowski et al., 2019), TriviaQA (Joshi et al., 2017), and WebQuestions (WebQ; Berant et al., 2013).
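All three datasets are publicly available. Below is a minimal sketch of loading them with the Hugging Face `datasets` library; the library and the dataset identifiers are assumptions for illustration, since the authors distribute their own preprocessed data.

```python
from datasets import load_dataset

# Hugging Face identifiers for the three benchmarks (assumed, not from the paper).
nq = load_dataset("natural_questions", split="validation")  # very large download
trivia = load_dataset("trivia_qa", "unfiltered.nocontext", split="validation")
webq = load_dataset("web_questions", split="train")

print(len(nq), len(trivia), len(webq))
```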
Dataset Splits | No | During training, we save a checkpoint every 500 steps and select the best checkpoint based on its performance on the development set. The paper mentions a development set but does not specify explicit training/validation/test splits (e.g., percentages or sample counts) within the provided text.
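For illustration, a self-contained toy sketch of the quoted checkpoint-selection procedure; the linear model, dummy loss, and `evaluate_em` stub are hypothetical stand-ins, not the authors' code.

```python
import torch
import torch.nn as nn

model = nn.Linear(8, 2)
optimizer = torch.optim.Adam(model.parameters())

def evaluate_em(model: nn.Module) -> float:
    # Stand-in for exact-match evaluation on the development set.
    return torch.rand(1).item()

best_em, best_path = -1.0, None
for step in range(1, 2001):
    optimizer.zero_grad()
    loss = model(torch.randn(4, 8)).pow(2).mean()  # dummy training step
    loss.backward()
    optimizer.step()
    if step % 500 == 0:                            # save a checkpoint every 500 steps
        path = f"checkpoint_{step}.pt"
        torch.save(model.state_dict(), path)
        em = evaluate_em(model)                    # dev-set exact match
        if em > best_em:
            best_em, best_path = em, path          # keep the best dev checkpoint
print(f"selected {best_path} (dev EM {best_em:.3f})")
```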
Hardware Specification | Yes | We run all of our experiments on a machine with 96 CPUs, 1.3TB physical memory, and 16 A100 GPUs.
Software Dependencies | No | We use PyTorch (Paszke et al., 2019) to implement our proposed model and relevant baselines. The paper names PyTorch but does not give a version number, nor does it list other software dependencies with their respective versions.
Experiment Setup | Yes | For both the retriever and reader, we use the base configuration that consists of 12 layers, 768-dimensional hidden size, and 12 attention heads. In all experiments, we retrieve 50 documents, unless stated otherwise... We train the model on these question-answer (masked sentence-named entities) pairs for 82,000 steps with a batch size of 64 using Adam (Kingma and Ba, 2015). We perform training for 10 epochs on NQ and TriviaQA with a batch size of 64, and for 20 epochs on WebQ with a batch size of 16.
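For quick reference, the reported hyperparameters gathered into a single structure; the dictionary keys are illustrative and do not correspond to flags in the authors' training scripts.

```python
# Hyperparameters reported in the paper, collected for reference (key names assumed).
config = {
    "encoder_layers": 12,          # base configuration, shared by retriever and reader
    "hidden_size": 768,
    "attention_heads": 12,
    "retrieved_documents": 50,     # unless stated otherwise
    "pretraining_steps": 82_000,   # masked sentence / named-entity pairs
    "pretraining_batch_size": 64,
    "optimizer": "Adam",           # Kingma and Ba (2015)
    "finetune_epochs": {"nq": 10, "triviaqa": 10, "webq": 20},
    "finetune_batch_size": {"nq": 64, "triviaqa": 64, "webq": 16},
}
```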