Learning User Dependencies for Recommendation

Authors: Yong Liu, Peilin Zhao, Xin Liu, Min Wu, Lixin Duan, Xiao-Li Li

IJCAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility

Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on four real datasets have been performed to demonstrate the effectiveness of the proposed PRMF model.
Researcher Affiliation | Collaboration | Institute for Infocomm Research, A*STAR, Singapore; Artificial Intelligence Department, Ant Financial Services Group, China; Garena Online, Singapore; Big Data Research Center, University of Electronic Science and Technology of China, China
Pseudocode | Yes | Algorithm 1: PRMF Optimization Algorithm
Open Source Code | No | The paper does not provide any explicit statements or links to open-source code for the methodology described.
Open Datasets | Yes | The experiments are performed on four public datasets: MovieLens-100K, MovieLens-1M, Ciao, and Epinions.
Dataset Splits | Yes | Cross-validation is adopted to choose the parameters for each evaluated algorithm. The validation data is constructed by randomly selecting 10% of the ratings in the training data.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running its experiments.
Software Dependencies | No | The paper does not list specific software components with their version numbers.
Experiment Setup | Yes | For matrix factorization methods, the dimensionality of the latent space d is set to 10. The latent features of users and items are randomly initialized from a Gaussian distribution with mean 0 and standard deviation 1/d. Moreover, the regularization parameters are set so that λ_u = λ_v, chosen from {10^-5, 10^-4, ..., 10^-1}. For PRMF, α is chosen from {2^-5, 2^-4, ..., 2^-1} and θ is chosen from {2^-5, 2^-4, ..., 2^-1}. For simplicity, γ = 10^-4, β = 10, and ρ = 100 are set empirically.
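The validation split described above (randomly holding out 10% of the training ratings) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the `(user, item, rating)` triple representation and the function name `split_validation` are assumptions.

```python
import numpy as np

def split_validation(ratings, frac=0.1, seed=0):
    """Randomly hold out `frac` of the training ratings as a validation set.

    `ratings` is assumed to be a list of (user, item, rating) triples;
    hypothetical sketch of the split described in the paper's setup.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(ratings))
    n_val = int(frac * len(ratings))          # 10% held out by default
    val = [ratings[i] for i in idx[:n_val]]
    train = [ratings[i] for i in idx[n_val:]]
    return train, val

# Tiny synthetic example: 10 users, each rating 2 items
ratings = [(u, i, r) for u in range(10) for i, r in [(0, 3.0), (1, 4.0)]]
train, val = split_validation(ratings, frac=0.1)
```

With 20 ratings and `frac=0.1`, the split yields 18 training and 2 validation triples.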
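The initialization and hyperparameter grids from the experiment setup can also be written out concretely. A minimal sketch, assuming NumPy and arbitrary user/item counts; the grids follow the values quoted above, and everything else (variable names, matrix shapes) is illustrative.

```python
import numpy as np
from itertools import product

d = 10                      # dimensionality of the latent space
n_users, n_items = 100, 50  # hypothetical dataset sizes

rng = np.random.default_rng(0)
# Latent features drawn from a Gaussian with mean 0 and std 1/d
U = rng.normal(0.0, 1.0 / d, size=(n_users, d))
V = rng.normal(0.0, 1.0 / d, size=(n_items, d))

# Hyperparameter grids from the setup (with lambda_u = lambda_v)
lambdas = [10.0 ** e for e in range(-5, 0)]   # {10^-5, ..., 10^-1}
alphas  = [2.0 ** e for e in range(-5, 0)]    # {2^-5, ..., 2^-1}
thetas  = [2.0 ** e for e in range(-5, 0)]    # {2^-5, ..., 2^-1}
gamma, beta, rho = 1e-4, 10.0, 100.0          # fixed empirically

# Full grid over (lambda, alpha, theta) for cross-validation
grid = list(product(lambdas, alphas, thetas))
```

Each of the three grids has 5 values, so cross-validation sweeps 5 × 5 × 5 = 125 configurations per dataset.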