SPAN: Understanding a Question with Its Support Answers

Authors: Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Xueqi Cheng

Venue: AAAI 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments conducted on 100,000 real-world questions from Yahoo! show that SPAN performs better than baseline methods.
Researcher Affiliation | Academia | CAS Key Laboratory of Network Data Science and Technology, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks.
Open Source Code | No | The paper states 'More complementary materials of this research are available in http://pl8787.github.io/qa.html.', but this is not an explicit statement that the code for the described method is open-sourced, nor is the link a direct code repository.
Open Datasets | Yes | Experiments are conducted on the Yahoo! Answers dataset to evaluate SPAN. The dataset contains 142,627 questions and their candidate answers; the remaining 123,032 questions are split into training, validation, and testing sets of 98,426, 12,303, and 12,303 questions, respectively.
Dataset Splits | Yes | The remaining 123,032 questions are split into training, validation, and testing sets of 98,426, 12,303, and 12,303 questions, respectively (see the split sketch after the table).
Hardware Specification | No | The paper does not provide any details about the hardware used for the experiments.
Software Dependencies | No | The paper mentions 'Word2Vec' and 'CSM' but does not give version numbers for these or any other software dependencies used in the experiments.
Experiment Setup | Yes | The BM25 parameters are set to k1 = 0.3 and b = 0.05, tuned by grid search on the validation set (see the grid-search sketch after the table). For SPAN, λ1 and λ2i are set to be equal in the experiments, with the other parameters learned automatically.
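The split reported above corresponds to an 80/10/10 partition of the 123,032 retained questions (98,426 / 12,303 / 12,303). The paper does not publish the actual assignment, so the sketch below only illustrates the reported proportions; the function name, the random seed, and the uniform shuffle are assumptions, not details from the paper.

```python
import random

def split_questions(question_ids, seed=0):
    """Illustrative 80/10/10 train/validation/test split.

    With 123,032 question ids this yields 98,426 / 12,303 / 12,303,
    matching the counts reported in the paper. The shuffle and seed are
    assumptions; the authors' actual assignment is not published.
    """
    rng = random.Random(seed)
    ids = list(question_ids)
    rng.shuffle(ids)
    n = len(ids)
    n_val = n_test = round(0.1 * n)   # 12,303 when n == 123,032
    n_train = n - n_val - n_test      # 98,426 when n == 123,032
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]
```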
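For the experiment setup, the paper only reports the selected BM25 values k1 = 0.3 and b = 0.05 and that they were tuned by grid search on the validation set. Below is a minimal sketch of such a search; the Okapi BM25 variant, the candidate grids, the data layout (query terms, candidate answers, relevance labels), and the evaluation metric are all assumptions, not the authors' implementation.

```python
import math
from collections import Counter

def bm25_score(query_terms, answer_terms, df, n_docs, avgdl, k1=0.3, b=0.05):
    """Okapi BM25 score of one candidate answer for a question.

    df      -- document frequency of each term over the answer collection
    n_docs  -- number of answers in the collection
    avgdl   -- average answer length in tokens
    k1, b   -- BM25 parameters (the paper reports k1=0.3, b=0.05)
    """
    tf = Counter(answer_terms)
    dl = len(answer_terms)
    score = 0.0
    for term in set(query_terms):
        if term not in tf:
            continue
        d = df.get(term, 0)
        idf = math.log(1.0 + (n_docs - d + 0.5) / (d + 0.5))
        score += idf * tf[term] * (k1 + 1) / (tf[term] + k1 * (1 - b + b * dl / avgdl))
    return score

def grid_search_bm25(validation_set, df, n_docs, avgdl, metric):
    """Pick (k1, b) maximising a ranking metric on the validation set.

    validation_set -- assumed list of (query_terms, candidate_answers, labels)
    metric         -- assumed callable(scores_per_query, labels_per_query) -> float,
                      e.g. precision@1; both are illustrative, not from the paper
    """
    best_params, best_value = None, float("-inf")
    for k1 in (0.1, 0.3, 0.5, 1.0, 1.2, 2.0):          # assumed candidate grid
        for b in (0.0, 0.05, 0.1, 0.25, 0.5, 0.75):    # assumed candidate grid
            scores = [[bm25_score(q, a, df, n_docs, avgdl, k1, b) for a in answers]
                      for q, answers, _ in validation_set]
            labels = [lab for _, _, lab in validation_set]
            value = metric(scores, labels)
            if value > best_value:
                best_params, best_value = (k1, b), value
    return best_params, best_value
```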