Match-SRNN: Modeling the Recursive Matching Structure with Spatial RNN

Authors: Shengxian Wan, Yanyan Lan, Jun Xu, Jiafeng Guo, Liang Pang, Xueqi Cheng

IJCAI 2016 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conducted experiments on question answering and paper citation tasks to evaluate the effectiveness of our model. The experimental results showed that Match-SRNN can significantly outperform existing deep models. The experimental results are listed in Table 1.
Researcher Affiliation | Academia | CAS Key Lab of Network Data Science and Technology, Institute of Computing Technology, Chinese Academy of Sciences, China; University of Chinese Academy of Sciences, China
Pseudocode | No | The paper describes the model architecture and computational steps in prose and mathematical equations, but it does not include a clearly labeled pseudocode or algorithm block (a hedged sketch of the core recursion is given after this table).
Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the Match-SRNN methodology is publicly available.
Open Datasets | Yes | The QA dataset is collected from Yahoo! Answers, a community question answering system where some users propose questions and other users submit answers, as in [Wan et al., 2016]. The PC task is to match two papers with a citation relationship; the dataset is constructed as in [Pang et al., 2016].
Dataset Splits | No | The paper states that parameters were selected based on performance on a validation set, but it does not provide specific details about the size or proportion of this validation split for the main experiments. For the simulation experiment, only training and testing sets are mentioned, not a validation set.
Hardware Specification | No | The paper does not provide any specific hardware details such as exact GPU/CPU models, processor types, or memory used for running the experiments.
Software Dependencies | No | The paper mentions software such as Word2Vec, AdaGrad, and the Lucene search engine, but it does not provide specific version numbers for these or any other software dependencies.
Experiment Setup | Yes | The batch size of SGD is set to 128. The dimension of the neural tensor network and the spatial RNN is set to 10, because it gave the best validation results among the settings d = 1, 2, 5, 10, and 20. The initial learning rates of AdaGrad are also selected by validation (a hedged configuration sketch is also given after this table).
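As noted in the Pseudocode row above, the paper specifies the model only through prose and equations. The minimal NumPy sketch below shows how the recursive matching structure can be realized: a spatial RNN sweeps over a precomputed word-pair interaction tensor, and each cell depends on its left, top, and top-left neighbors. The function name, the plain tanh cell (the paper uses a gated 2D-GRU), the random weights, and the toy dimensions are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def spatial_rnn_match(s, d=10, seed=0):
    """Score a sentence pair from its word-pair interaction tensor.

    s : (m, n, c) array; s[i, j] is the interaction vector for word i of
        sentence 1 and word j of sentence 2 (in the paper this comes from
        a neural tensor network over the word embeddings).
    d : hidden dimension of the spatial RNN.
    """
    rng = np.random.default_rng(seed)
    m, n, c = s.shape
    # One weight matrix over the three predecessor states plus the current
    # interaction vector; the paper uses a gated 2D-GRU cell instead.
    W = rng.normal(scale=0.1, size=(d, 3 * d + c))
    b = np.zeros(d)
    w_out = rng.normal(scale=0.1, size=d)

    h = np.zeros((m + 1, n + 1, d))  # zero states pad the top row and left column
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            x = np.concatenate([h[i - 1, j], h[i, j - 1], h[i - 1, j - 1],
                                s[i - 1, j - 1]])
            h[i, j] = np.tanh(W @ x + b)
    # The matching score is read off the bottom-right hidden state.
    return float(w_out @ h[m, n])

# Toy usage: sentence lengths 5 and 7, 4-dimensional interaction vectors.
print(spatial_rnn_match(np.random.rand(5, 7, 4)))
```

Scanning the grid left to right and top to bottom guarantees that all three predecessor states exist when a cell is computed, which is the recursive structure the paper exploits to accumulate matching evidence over prefixes of the two sentences.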
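The Experiment Setup row reports mini-batches of 128, a hidden dimension of 10, and AdaGrad with a validation-tuned initial learning rate. The sketch below shows one way those values might be wired into a training step; PyTorch, the placeholder linear model, and the 0.1 learning rate are assumptions made only for illustration, since the paper names neither a framework nor the selected learning rate.

```python
import torch

d = 10            # hidden dimension, best of {1, 2, 5, 10, 20} on validation (from the paper)
batch_size = 128  # SGD mini-batch size reported in the paper

# Placeholder standing in for the full Match-SRNN network (assumption).
model = torch.nn.Linear(2 * d, 1)

# The paper selects the initial AdaGrad learning rate on a validation set;
# 0.1 is an arbitrary stand-in, not a value reported by the authors.
optimizer = torch.optim.Adagrad(model.parameters(), lr=0.1)

# One illustrative optimization step on random data.
x = torch.randn(batch_size, 2 * d)
loss = model(x).pow(2).mean()
loss.backward()
optimizer.step()
optimizer.zero_grad()
```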