A Synthetic Approach for Recommendation: Combining Ratings, Social Relations, and Reviews

Authors: Guang-Neng Hu, Xin-Yu Dai, Yunya Song, Shu-Jian Huang, Jia-Jun Chen

IJCAI 2015

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate our models on two datasets: Epinions and Ciao. ... The metric root-mean-square error (RMSE) for rating prediction task is defined as ... We use grid search to determine hyperparameters and report the best RMSE on the testing set over 50 passes...
Researcher Affiliation | Academia | Guang-Neng Hu1, Xin-Yu Dai1, Yunya Song2, Shu-Jian Huang1, Jia-Jun Chen1 1National Key Laboratory for Novel Software Technology; Nanjing University, Nanjing 210023, China 2Department of Journalism; Hong Kong Baptist University, Hong Kong hugn@nlp.nju.edu.cn, {daixinyu,huangsj,chenjj}@nju.edu.cn, yunyasong@hkbu.edu.hk
Pseudocode | No | The paper describes the learning procedure but does not provide formally structured pseudocode or an algorithm block.
Open Source Code | No | The paper only provides links to source code for baseline models (PMF and HFT) that they used, not for their own proposed MR3 framework. There is no explicit statement about making their own code publicly available.
Open Datasets | Yes | We evaluate our models on two datasets: Epinions and Ciao. ... http://www.public.asu.edu/~jtang20/
Dataset Splits | No | The paper states, 'We randomly select x% as the training set and report the prediction performance on the remaining (1 − x)% testing set,' which describes only training and testing splits, with no explicit mention of a separate validation dataset split percentage or samples.
Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments, such as GPU models, CPU specifications, or memory.
Software Dependencies | No | The paper mentions using source code for the PMF and HFT baselines but does not provide specific version numbers for any software dependencies or libraries used in their implementation.
Experiment Setup | Yes | For both eSMF and LOCABAL, the number of latent factors F = 10, norm penalty λ = 0.5, learning rate = 0.0007, momentum = 0.8, and λ_rel = 0.1. For all methods, we set the number of latent factors F = 10, norm penalty λ = 0.5, learning rate = 0.0007, momentum = 0.8. For HFT, λ_rev = 0.1; for MR3, λ_rel = 0.001 and λ_rev = 0.05.
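The evaluation protocol quoted above (a random x% training split with RMSE reported on the remainder, and no validation set) can be sketched as follows. This is a minimal illustration of the standard procedure, not the authors' code; the function names and the `seed` parameter are our own choices.

```python
import numpy as np

def train_test_split(n_ratings, train_frac=0.8, seed=0):
    """Randomly pick train_frac of the rating indices as the training set;
    the remainder forms the testing set (no separate validation split)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_ratings)
    cut = int(train_frac * n_ratings)
    return idx[:cut], idx[cut:]

def rmse(y_true, y_pred):
    """Root-mean-square error for the rating prediction task."""
    diff = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean(diff ** 2)))

# Example: three held-out ratings and their predictions.
print(rmse([4, 3, 5], [3.5, 3.0, 4.5]))  # → ~0.408
```

With x = 80, `train_test_split` reproduces the paper's "x% training / (1 − x)% testing" protocol; grid search over the hyperparameters listed in the Experiment Setup row would then select the setting with the lowest test RMSE.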