Community-Based Question Answering via Asymmetric Multi-Faceted Ranking Network Learning

Authors: Zhou Zhao, Hanqing Lu, Vincent Zheng, Deng Cai, Xiaofei He, Yueting Zhuang

AAAI 2017

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "The extensive experiments on a large-scale dataset from a real world CQA site show that our method achieves better performance than other state-of-the-art solutions to the problem. We evaluate the performance of our method using the Quora dataset in (Zhao et al. 2015), which is obtained from a popular question answering site, Quora." |
| Researcher Affiliation | Collaboration | Zhou Zhao,¹ Hanqing Lu,¹ Vincent W. Zheng,² Deng Cai,³ Xiaofei He,³ Yueting Zhuang¹ — ¹College of Computer Science, Zhejiang University; ²Advanced Digital Sciences Center, Singapore; ³State Key Lab of CAD&CG, Zhejiang University. {zhaozhou, lhq110, yzhuang}@zju.edu.cn, vincent.zheng@adsc.com.sg, {dengcai, xiaofeihe}@gmail.com |
| Pseudocode | No | No explicit pseudocode or algorithm blocks were found. |
| Open Source Code | No | The paper mentions using code for baseline methods but does not provide access to the source code for its own proposed method (AMRNL). |
| Open Datasets | Yes | "We evaluate the performance of our method using the Quora dataset in (Zhao et al. 2015), which is obtained from a popular question answering site, Quora." |
| Dataset Splits | Yes | "We use the first 60%, 70% and 80% posted questions as training set, other 10% for validation and the remaining 10% for testing." |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) were mentioned for running the experiments. |
| Software Dependencies | No | No software dependencies with version numbers (e.g., library names with versions) were mentioned. |
| Experiment Setup | No | Although parameters such as the embedding dimension and λ are varied, training hyperparameters such as the learning rate, batch size, or number of epochs are not explicitly stated in the main text. |
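The dataset-split row quotes a chronological protocol: the earliest 60%, 70%, or 80% of posted questions form the training set, the next 10% validation, and the remainder testing. A minimal sketch of such a position-based split, assuming the questions are already sorted by posting time (the function name and fractions are illustrative, not from the paper):

```python
def chronological_split(questions, train_frac=0.8, val_frac=0.1):
    """Split a time-ordered sequence into train/validation/test by position.

    Assumes `questions` is sorted by posting time, oldest first, as the
    quoted protocol implies; train_frac=0.8 matches one reported setting.
    """
    n = len(questions)
    train_end = int(n * train_frac)
    val_end = train_end + int(n * val_frac)
    return (questions[:train_end],          # earliest questions -> training
            questions[train_end:val_end],   # next slice -> validation
            questions[val_end:])            # most recent -> testing

# Example: 100 time-ordered questions -> 80 train, 10 validation, 10 test
train, val, test = chronological_split(list(range(100)))
```

Splitting by position rather than at random preserves the temporal order, so the model is always evaluated on questions posted after those it was trained on.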