Social Recommendation with an Essential Preference Space
Authors: Chun-Yi Liu, Chuan Zhou, Jia Wu, Yue Hu, Li Guo
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on four real-world datasets demonstrate the superiority of the proposed SREPS model compared with seven state-of-the-art social recommendation methods. |
| Researcher Affiliation | Academia | 1Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China 2School of Cyber Security, University of Chinese Academy of Sciences, Beijing, China 3Department of Computing, Macquarie University, Sydney, NSW 2109, Australia |
| Pseudocode | No | The paper describes the model and optimization steps in detail with mathematical equations and textual explanations, but it does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statements or links indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | Four datasets were used in our experiments: FilmTrust (Guo, Zhang, and Yorke-Smith 2013), Flixster (Jamali and Ester 2010), Epinions (Tang, Gao, and Liu 2012) and Ciao (Tang et al. 2012). |
| Dataset Splits | No | For each dataset, 80% of the rating data are selected randomly as the training set and the rest are used as the testing set. The paper specifies training and testing splits but does not explicitly mention a separate validation set. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory specifications, or cloud instances) used for running the experiments. |
| Software Dependencies | No | The paper mentions models and algorithms like LINE and Matrix Factorization, but it does not specify any software dependencies with version numbers (e.g., Python version, specific library versions like TensorFlow or PyTorch). |
| Experiment Setup | Yes | The optimal experimental settings for each method were either determined by our experiments or taken from the suggestions of previous works. The settings taken from previous works include: the learning rate η = 0.001; and the dimension of the latent vectors d = 5 and 10. All the regularization parameters for the latent vectors were set to be the same at λU = λV = 0.001. The other parameters are shown in Table 2. We set l = d0 = d1 = d2 for the SREPS model, i.e., the dimensions of the essential preference space and the three semantic latent spaces were the same. The regularization parameter λ was set to 0.001. The hyperparameters α and β are also shown in Table 2 and were based on the results of the parameter sensitivity analyses. |
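The paper's evaluation protocol randomly selects 80% of the rating data for training and uses the remainder for testing (with no separate validation set). A minimal sketch of that split is shown below; the rating-triple format and the random seed are assumptions for illustration, not details from the paper.

```python
import numpy as np

# Stand-in for (user, item, rating) triples; the paper does not
# specify a storage format, so this shape is an assumption.
ratings = np.arange(100)

rng = np.random.default_rng(0)            # fixed seed for reproducibility
perm = rng.permutation(len(ratings))      # random ordering of all ratings
cut = int(0.8 * len(ratings))             # 80% boundary, as in the paper

train = ratings[perm[:cut]]               # 80% training set
test = ratings[perm[cut:]]                # 20% testing set
```

Because the paper reports a random split with no validation set, any hyperparameter tuning (e.g., the α and β sensitivity analyses) would have to be run against separately held-out data or repeated splits.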