Stochastically Robust Personalized Ranking for LSH Recommendation Retrieval
Authors: Dung D. Le, Hady W. Lauw (pp. 4594-4601)
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on publicly available datasets show that the proposed framework not only effectively learns users' preferences for prediction, but also achieves high compatibility with LSH stochasticity, producing superior post-LSH indexing performances as compared to state-of-the-art baselines. |
| Researcher Affiliation | Academia | Dung D. Le, Hady W. Lauw School of Information Systems, Singapore Management University {ddle, hadywlauw}@smu.edu.sg |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. It only mentions using a third-party LSH implementation. |
| Open Datasets | Yes | We experiment on two large public datasets: MovieLens 20M and Netflix (Table 1). By default, MovieLens 20M includes users with at least 20 ratings. For consistency, we apply the same to Netflix. We randomly keep 60% of the ratings for training and hide 40% for testing in a stratified manner. We report the average results over five such random training/testing splits. |
| Dataset Splits | Yes | We randomly keep 60% of the ratings for training and hide 40% for testing in a stratified manner. We report the average results over five such random training/testing splits. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions using 'the implementation in (Aly, Munich, and Perona 2011)' for LSH but does not provide specific software names with version numbers for other dependencies. |
| Experiment Setup | Yes | We tune the hyper-parameters of all models for the best performances. For IPMF, we adopt the setting used for Netflix dataset. For the ranking-based methods (BPR, COE, IBPR and SRPR), the learning rate and the regularization are 0.05 and 0.001 respectively. For CFEE, these values are 0.1 and 0.001 respectively. All LSH-indexed models use d = 20 dimensionalities in their latent representations. Similar trends are observed with other dimensionalities. |
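The splitting protocol quoted above (keep 60% of each user's ratings for training, hide 40% for testing, average over five random splits) could be reproduced with a sketch like the following. The helper name, the `(user, item, rating)` triple layout, and the toy data are illustrative assumptions, not taken from the paper.

```python
import random
from collections import defaultdict

def stratified_split(ratings, train_frac=0.6, seed=0):
    """Per-user stratified split: keep ~train_frac of each user's
    ratings for training and hide the rest for testing."""
    rng = random.Random(seed)
    by_user = defaultdict(list)
    for triple in ratings:              # triple = (user, item, rating)
        by_user[triple[0]].append(triple)
    train, test = [], []
    for user_ratings in by_user.values():
        rng.shuffle(user_ratings)
        cut = int(round(train_frac * len(user_ratings)))
        train.extend(user_ratings[:cut])
        test.extend(user_ratings[cut:])
    return train, test

# Toy data: 3 users with 10 ratings each; the paper averages its
# reported metrics over five such random splits (here, five seeds).
ratings = [(u, i, 5.0) for u in range(3) for i in range(10)]
splits = [stratified_split(ratings, seed=s) for s in range(5)]
```

Splitting per user (rather than over the pooled rating list) keeps the 60/40 ratio for every user, which is what "stratified" means here.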