Deep Modeling of Group Preferences for Group-Based Recommendation

Authors: Liang Hu, Jian Cao, Guandong Xu, Longbing Cao, Zhiping Gu, Wei Cao

AAAI 2014

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Finally, the experiments conducted on a real-world dataset prove the superiority of our deep model over other state-of-the-art methods.
Researcher Affiliation | Academia | (1) Shanghai Jiaotong University, (2) University of Technology Sydney, (3) Shanghai Technical Institute of Electronics & Information
Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. It describes the model and inference using mathematical equations and textual explanations.
Open Source Code | No | The paper does not provide any statement about making source code publicly available, nor a link to a code repository.
Open Datasets | Yes | CAMRa2011 (Said et al. 2011) released a real-world dataset containing the movie-watching records of households and the ratings on each watched movie given by some group members.
Dataset Splits | Yes | The dataset for track 1 of CAMRa2011 has 290 households with a total of 602 users who gave ratings (on a scale of 1–100) over 7,740 movies. This dataset has been partitioned into a training set and an evaluation set. The training set contains 145,069 ratings given by those 602 members, and 114,783 movie-choice records from the view of the 290 groups. (A hedged loading sketch appears below the table.)
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU or CPU models, memory) used to run the experiments.
Software Dependencies | No | The paper describes the use of RBMs and DBNs as building blocks but does not specify any software libraries or their version numbers (e.g., Python, PyTorch, TensorFlow versions). (A minimal RBM sketch appears below the table.)
Experiment Setup | Yes | In the experiments, we tune the hyperparameters for each model, e.g. the dimensionality of latent features and the regularization parameters, by cross-validation. Specifically, we set β = 1 and α = 0.5 (cf. Eq. (13)) for OCRBM and DLGR when no strategy is used, and we set α = 1 and f(g, i) = 1 / [1 + log s(g, i)] when a strategy s(·) is used. Also, we used similar settings for the weights of OCMF. (A worked example of these settings appears below the table.)