Reinforced History Backtracking for Conversational Question Answering
Authors: Minghui Qiu, Xinjing Huang, Cen Chen, Feng Ji, Chen Qu, Wei Wei, Jun Huang, Yin Zhang
AAAI 2021, pp. 13718–13726 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on a large conversational question answering dataset show that the proposed method can help to alleviate the problems arising from longer context history. Meanwhile, experiments show that the method yields better performance than other strong baselines, and the actions made by the method are insightful. |
| Researcher Affiliation | Collaboration | Minghui Qiu1, Xinjing Huang2, Cen Chen1, Feng Ji1, Chen Qu1, Wei Wei3, Jun Huang1, Yin Zhang2; 1 Alibaba Group, China; 2 Zhejiang University; 3 Huazhong University of Science and Technology |
| Pseudocode | Yes | The detailed algorithm is presented in Algorithm 1. (A hedged sketch of the REINFORCE-style update at the core of such an algorithm is given after this table.) |
| Open Source Code | No | The paper mentions using a pre-trained model from GitHub (https://github.com/google-research/bert) but does not provide a link to, or statement about, open-source code for the proposed method itself. (A hedged sketch of how such a checkpoint might be used to encode history turns is given after this table.) |
| Open Datasets | Yes | We conduct experiments on the QuAC dataset (https://quac.ai/). QuAC is a machine reading comprehension task with multi-turn interactions... We also evaluate the methods on the additional CANARD dataset (https://sites.google.com/view/qanta/projects/canard). |
| Dataset Splits | No | The paper mentions using a "development dataset" for evaluation and states "The training dataset has 11,567 examples" for Env-ST, but does not provide specific split percentages or sample counts for training, validation, and test sets that would enable reproduction of data partitioning. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper mentions using the vanilla BERT model and the REINFORCE algorithm, but does not pin versions for its software dependencies, such as the BERT implementation, PyTorch/TensorFlow, Python, or CUDA. |
| Experiment Setup | No | The paper describes the model architecture and training scheme but does not provide specific hyperparameters such as learning rate, batch size, number of epochs, or optimizer details required to reproduce the experimental setup. |
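To make the pseudocode row concrete: the paper trains a history-backtracking policy with REINFORCE, rewarding it with the downstream QA model's performance. The sketch below is a minimal illustration of that kind of update under stated assumptions, not the authors' code; `HistoryPolicy`, `reinforce_step`, and `qa_reward` are assumed names, and the per-turn keep/drop action space is one plausible reading of how Algorithm 1 could be realized.

```python
import torch
import torch.nn as nn

class HistoryPolicy(nn.Module):
    """Scores each history turn and emits keep/drop probabilities.

    Hypothetical module: the paper's actual policy architecture may differ.
    """
    def __init__(self, hidden_size: int):
        super().__init__()
        self.scorer = nn.Linear(hidden_size, 2)  # logits for {drop, keep}

    def forward(self, turn_reprs: torch.Tensor) -> torch.distributions.Categorical:
        # turn_reprs: (num_turns, hidden_size), e.g. pooled BERT vectors per turn
        return torch.distributions.Categorical(logits=self.scorer(turn_reprs))

def reinforce_step(policy, optimizer, turn_reprs, qa_reward, baseline=0.0):
    """One REINFORCE update: sample keep/drop actions, reward with the QA score.

    `qa_reward` is an assumed callable that evaluates the QA model on the
    history turns selected by `actions` and returns a scalar score (e.g. F1).
    """
    dist = policy(turn_reprs)
    actions = dist.sample()        # actions[i] == 1 keeps turn i, 0 drops it
    reward = qa_reward(actions)    # scalar QA score on the backtracked history
    # Policy-gradient loss: -(R - b) * sum_t log pi(a_t)
    loss = -(reward - baseline) * dist.log_prob(actions).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward
```

The baseline term is the standard variance-reduction device for REINFORCE; whether the paper uses one, and which reward shaping it applies, cannot be recovered from this summary alone.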
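Similarly, for the open-source-code and dependency rows: the paper encodes inputs with vanilla BERT from github.com/google-research/bert (a TensorFlow 1.x repository) but pins no versions. The sketch below substitutes the HuggingFace `transformers` port of the same `bert-base-uncased` weights to produce one pooled vector per history turn, suitable as `turn_reprs` for the policy above; this substitution is our assumption, not the authors' setup.

```python
import torch
from transformers import BertModel, BertTokenizer

# Stand-in for the google-research/bert checkpoint the paper cites;
# the HuggingFace hub hosts a port of the same pre-trained weights.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
encoder = BertModel.from_pretrained("bert-base-uncased")
encoder.eval()

def encode_turns(turns: list[str]) -> torch.Tensor:
    """Return one (hidden_size,) vector per history turn via the [CLS] token."""
    batch = tokenizer(turns, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = encoder(**batch)
    return out.last_hidden_state[:, 0]  # shape: (num_turns, 768)

# Example: encode two prior QA turns, then score them with HistoryPolicy above.
turn_reprs = encode_turns([
    "Q: Where was she born? A: In Warsaw.",
    "Q: When did she move? A: In 1891.",
])
```

Pinning the environment (exact `transformers`, `torch`, Python, and CUDA versions) is the kind of detail whose absence the software-dependency row flags.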