A Framework for Recommending Relevant and Diverse Items
Authors: Chaofeng Sha, Xiaowei Wu, Junyu Niu
IJCAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on the MovieLens dataset demonstrate that our approach outperforms state-of-the-art techniques in terms of both precision and diversity. |
| Researcher Affiliation | Academia | Chaofeng Sha, Xiaowei Wu, Junyu Niu; School of Computer Science, Fudan University; Shanghai Key Laboratory of Intelligent Information Processing; {cfsha,14212010020,jyniu}@fudan.edu.cn |
| Pseudocode | Yes | Algorithm 1: Greedy Search for Modular-Max Sum Dispersion; Algorithm 2: Greedy Search for Submodular Function Maximization; Algorithm 3: Greedy Search for Submodular-Max Sum Dispersion (a hedged greedy-search sketch follows the table). |
| Open Source Code | No | The paper does not provide any explicit statements about open-source code availability or links to code repositories. |
| Open Datasets | Yes | The experiments are carried out on a publicly available rating dataset, the MovieLens dataset. It consists of 1,000,209 ratings for 3,952 movies by 6,040 users of the eponymous online movie recommender service. |
| Dataset Splits | No | The paper states it splits the dataset into a training set YT and a test set YP by 'randomly assigning 50% of tuples' to each, but it does not mention a distinct validation set or specify percentages for a three-way train/validation/test split (a sketch of the stated 50/50 split follows the table). |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions using a 'PMF model' and parameters like 'learning rate', 'regularization parameter', and 'momentum', but it does not list any specific software or library names with version numbers. |
| Experiment Setup | Yes | For the parameter setting, we set the dimensionality of the latent space D = 80/100/120 when training the PMF model. Both the baselines and our approach take the same settings, and we choose a learning rate of 20, a regularization parameter of 0.1, and a momentum of 0.3. In addition, for PMF+ER, we set λ = 1, which gave the best result compared to other settings. For our approach, we set = 0.3 and β = 1.5 (a training-loop sketch using these values follows the table). |
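The paper releases no code, so the following is a minimal Python sketch of what the greedy search in Algorithm 1 might look like for the modular case: a per-item relevance score plus a pairwise-distance (max-sum dispersion) term, with the item of largest marginal gain added at each step. The function name, the `trade_off` weight, and the data structures are all assumptions, not the authors' implementation.

```python
def greedy_max_sum_dispersion(candidates, relevance, distance, k, trade_off=1.0):
    """Hypothetical greedy search for a modular-relevance + max-sum-dispersion
    objective, loosely mirroring Algorithm 1 of the paper.

    candidates: iterable of item ids
    relevance:  dict mapping item id -> modular relevance score (e.g., predicted rating)
    distance:   callable (i, j) -> pairwise dissimilarity of two items
    k:          length of the recommendation list
    trade_off:  weight on the dispersion term (name and default are assumptions)
    """
    selected = []
    remaining = set(candidates)
    while len(selected) < k and remaining:
        # Marginal gain of adding item i: its own relevance plus the extra
        # dispersion it contributes against every item already selected.
        best = max(
            remaining,
            key=lambda i: relevance[i]
            + trade_off * sum(distance(i, j) for j in selected),
        )
        selected.append(best)
        remaining.remove(best)
    return selected
```

For the submodular variants (Algorithms 2 and 3), the modular relevance term would presumably be replaced by the marginal gain of a submodular set function; the greedy skeleton itself would stay the same.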
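As a companion to the Dataset Splits row, here is a minimal sketch of the 50/50 random split the paper describes, assuming ratings arrive as (user, item, rating) tuples. The seed and function name are illustrative; the paper reports neither.

```python
import random

def split_ratings(ratings, train_fraction=0.5, seed=42):
    """Randomly assign rating tuples to a training set (Y_T) and a test set
    (Y_P), following the 50/50 split the paper describes. The seed value is
    an assumption; the paper does not report one."""
    rng = random.Random(seed)
    shuffled = list(ratings)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]  # (Y_T, Y_P)
```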
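Finally, a sketch of how the quoted PMF hyperparameters (D in {80, 100, 120}, learning rate 20, regularization 0.1, momentum 0.3) could plug into a plain full-batch gradient-descent-with-momentum trainer. The epoch count, random seed, initialization scale, and gradient averaging are assumptions the paper does not state.

```python
import numpy as np

def train_pmf(ratings, n_users, n_items, D=100, lr=20.0, reg=0.1,
              momentum=0.3, epochs=50, seed=0):
    """Probabilistic Matrix Factorization trained with full-batch gradient
    descent plus momentum. D, lr, reg, and momentum match the values quoted
    from the paper; epochs, seed, and the 0.1 init scale are assumptions."""
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((n_users, D))   # user latent factors
    V = 0.1 * rng.standard_normal((n_items, D))   # item latent factors
    dU, dV = np.zeros_like(U), np.zeros_like(V)
    n = len(ratings)
    for _ in range(epochs):
        gU, gV = np.zeros_like(U), np.zeros_like(V)
        for u, i, r in ratings:
            err = U[u] @ V[i] - r                 # prediction error
            gU[u] += err * V[i]
            gV[i] += err * U[u]
        gU = gU / n + reg * U                     # averaged gradient + L2 term
        gV = gV / n + reg * V
        dU = momentum * dU - lr * gU              # momentum update
        dV = momentum * dV - lr * gV
        U += dU
        V += dV
    return U, V
```

Predicted ratings U[u] @ V[i] from such a model would then supply the modular relevance scores consumed by the greedy search sketched above.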