Tree-of-Reasoning Question Decomposition for Complex Question Answering with Large Language Models

Authors: Kun Zhang, Jiali Zeng, Fandong Meng, Yuanzhuo Wang, Shiqi Sun, Long Bai, Huawei Shen, Jie Zhou

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate our proposed framework on four benchmark datasets. The experimental results demonstrate that our proposed methods consistently outperform strong baselines by a substantial margin across all datasets.
Researcher Affiliation | Collaboration | 1 CAS Key Laboratory of AI Security, Institute of Computing Technology, Chinese Academy of Sciences; 2 School of Computer Science and Technology, University of Chinese Academy of Sciences; 3 Pattern Recognition Center, WeChat AI, Tencent Inc, China; 4 Big Data Academy, Zhongke
Pseudocode | Yes | Algorithm 1: Processing a Complex Question (a hedged sketch of a decomposition loop of this general shape appears after this table).
Open Source Code | No | The paper does not explicitly state that the source code for its methodology is publicly available, nor does it provide a link to a code repository.
Open Datasets | Yes | We select four complex multi-hop question-answering datasets, HotpotQA (Yang et al. 2018), HybridQA (Chen et al. 2020), MuSiQue (Trivedi et al. 2022), and WikiMultiHopQA (WMHQA) (Ho et al. 2020). (A loading example appears after this table.)
Dataset Splits | No | The paper describes the datasets used (HotpotQA, HybridQA, MuSiQue, WMHQA) but does not explicitly state the specific training, validation, and test splits (e.g., percentages or sample counts) used for its experiments.
Hardware Specification | No | The paper mentions using 'gpt-3.5-turbo' via OpenAI's API and 'ColBERTv2' as a retrieval model, but it does not provide any specific details about the hardware (e.g., GPU models, CPU types) used to run the experiments or train the custom models.
Software Dependencies | No | The paper mentions using 'gpt-3.5-turbo', 'ColBERTv2', and 'T5' as the base model for its trained components, but it does not specify version numbers for these or any other software dependencies, such as programming languages or deep learning frameworks.
Experiment Setup | No | The paper describes the models used (gpt-3.5-turbo, ColBERTv2, T5) and the data collection for training custom components (RTC, QG), but it does not provide specific experimental setup details such as hyperparameters (e.g., learning rate, batch size, number of epochs, optimizer settings) for training those models. (A placeholder training configuration after this table makes the missing settings explicit.)
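
The paper's Algorithm 1 (Processing a Complex Question) is given only as pseudocode. Below is a minimal sketch of a tree-of-reasoning-style decompose-then-answer loop of that general shape, assuming gpt-3.5-turbo through the OpenAI Python SDK and a stub retrieve() standing in for ColBERTv2; the prompts, sub-question limit, and function names are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a decompose-then-answer loop; prompts, stopping
# criterion, and retrieve() are assumptions, not the paper's method.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def llm(prompt: str) -> str:
    """Single-turn call to gpt-3.5-turbo, the model named in the paper."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip()


def retrieve(query: str, k: int = 5) -> list[str]:
    """Placeholder for a ColBERTv2-style retriever returning passages."""
    raise NotImplementedError("plug in a retrieval backend here")


def answer_complex_question(question: str, max_subs: int = 4) -> str:
    """Decompose a complex question, answer each sub-question over
    retrieved evidence, then compose a final answer (illustrative only)."""
    subs = llm(
        "Decompose the question into simpler sub-questions, one per line:\n"
        + question
    ).splitlines()
    notes = []
    for sub in [s.strip() for s in subs if s.strip()][:max_subs]:
        passages = retrieve(sub)
        ans = llm(
            "Answer the sub-question using only these passages.\n"
            "Passages:\n" + "\n".join(passages) + "\nSub-question: " + sub
        )
        notes.append(sub + " -> " + ans)
    return llm(
        "Using the solved sub-questions, answer the original question concisely.\n"
        "Sub-answers:\n" + "\n".join(notes) + "\nQuestion: " + question
    )
```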
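
All four benchmarks are publicly released. As one illustration, HotpotQA can be loaded from the Hugging Face hub; the "distractor" configuration below is an assumption on our part, since the paper does not say which setting was used.

```python
# Illustrative loading of one benchmark (HotpotQA) from the Hugging Face
# hub; the "distractor" configuration is an assumption, not stated in the paper.
from datasets import load_dataset

hotpot = load_dataset("hotpot_qa", "distractor")
print(hotpot)                          # available splits and their sizes
example = hotpot["validation"][0]
print(example["question"], "->", example["answer"])
```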
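
Because the paper reports no hyperparameters for its T5-based RTC and QG components, a reproduction has to choose its own. The configuration below only makes the undocumented settings explicit; every value is a placeholder assumption, not a number reported by the authors.

```python
# Placeholder fine-tuning configuration for a T5-based component; every
# value marked "assumed" is our guess, since the paper reports none.
from transformers import (
    Seq2SeqTrainingArguments,
    T5ForConditionalGeneration,
    T5TokenizerFast,
)

model = T5ForConditionalGeneration.from_pretrained("t5-base")  # model size not stated
tokenizer = T5TokenizerFast.from_pretrained("t5-base")

training_args = Seq2SeqTrainingArguments(
    output_dir="qg_model",            # hypothetical path
    learning_rate=3e-4,               # assumed
    per_device_train_batch_size=16,   # assumed
    num_train_epochs=3,               # assumed
    weight_decay=0.01,                # assumed
    evaluation_strategy="epoch",
    predict_with_generate=True,
)
# A Seq2SeqTrainer would then be built from training_args, the model, and
# the tokenized decomposition data the paper describes collecting.
```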