Interpretable AMR-Based Question Decomposition for Multi-hop Question Answering
Authors: Zhenyun Deng, Yonghua Zhu, Yang Chen, Michael Witbrock, Patricia Riddle
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on HotpotQA demonstrate that our approach is competitive for interpretable reasoning and that the sub-questions generated by QDAMR are well-formed, outperforming existing question-decomposition-based multi-hop QA approaches. |
| Researcher Affiliation | Academia | School of Computer Science, University of Auckland, New Zealand {zden658, yzhu970}@aucklanduni.ac.nz, {yang.chen, m.witbrock, p.riddle}@auckland.ac.nz |
| Pseudocode | Yes | Algorithm 1: Question Decomposition Based on AMR |
| Open Source Code | No | The paper does not provide a specific link or explicit statement about the release of its source code. |
| Open Datasets | Yes | Given ordered sub-questions, we train a single-hop QA model on SQuAD [Rajpurkar et al., 2016] and on a new single-hop QA dataset, which consists of single-hop QA pairs constructed by [Pan et al., 2020] on HotpotQA. |
| Dataset Splits | Yes | Table 1: Results for QD-based multi-hop QA models on the dev set of HotpotQA. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions software like BART, T5, Spring, RoBERTa, but does not provide specific version numbers for these or other ancillary software components. |
| Experiment Setup | No | The paper describes the general experimental process but does not provide specific hyperparameters (e.g., learning rate, batch size) or detailed system-level training settings in the main text. |
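The paper's Algorithm 1 decomposes a multi-hop question via its AMR graph; the source text is not reproduced here, so the following is only a minimal toy sketch of the general question-decomposition idea for a bridge-type question. The `BridgeQuestion` structure, the `decompose` helper, and the `#1` answer placeholder are illustrative assumptions (the placeholder convention follows common QD practice, e.g. Pan et al.), not the paper's actual implementation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class BridgeQuestion:
    """Hypothetical container for a bridge-type multi-hop question.

    outer: template for the second hop, with a {bridge} slot,
           e.g. "Who directed {bridge}?"
    inner: description of the bridge entity extracted from the
           (here, pre-parsed) question, e.g. "the film that won ..."
    """
    outer: str
    inner: str

def decompose(q: BridgeQuestion) -> List[str]:
    """Split a bridge-type question into two ordered single-hop sub-questions.

    Sub-question 1 asks for the bridge entity; sub-question 2 reuses its
    answer via the '#1' placeholder, so a single-hop QA model can answer
    the two sub-questions in sequence.
    """
    sub1 = f"Which entity is {q.inner}?"
    sub2 = q.outer.format(bridge="#1")
    return [sub1, sub2]
```

Example: `decompose(BridgeQuestion(outer="Who directed {bridge}?", inner="the film that won Best Picture in 2019"))` yields `["Which entity is the film that won Best Picture in 2019?", "Who directed #1?"]`, mirroring the interpretable, ordered sub-question chain the table above describes.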