Low-Rank Linear Cold-Start Recommendation from Social Data
Authors: Suvash Sedhain, Aditya Krishna Menon, Scott Sanner, Lexing Xie, Darius Braziunas
AAAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on four real-world datasets show that LoCo yields significant improvements over state-of-the-art cold-start recommenders that exploit high-dimensional social network metadata. |
| Researcher Affiliation | Collaboration | Suvash Sedhain, Aditya Krishna Menon, Lexing Xie (Australian National University / Data61, Canberra, ACT, Australia); Scott Sanner (University of Toronto, Toronto, Canada); Darius Braziunas (Rakuten Kobo Inc., Toronto, Canada) |
| Pseudocode | No | The paper describes methods mathematically but does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. |
| Open Datasets | Yes | We create 10 temporal train-test splits for the Ebook dataset. We create 10 train-test folds on the other datasets by including a random 10% of the users in the test set and the remaining 90% of users in the training set. (A sketch of this splitting procedure follows the table.) |
| Dataset Splits | Yes | We used cross-validation with grid-search to tune all hyperparameters. |
| Hardware Specification | No | The paper does not provide specific hardware details used for running its experiments. |
| Software Dependencies | No | The paper describes the algorithms used (e.g., randomized SVD) but does not provide specific ancillary software details such as library names with version numbers. (A hedged randomized-SVD illustration follows the table.) |
| Experiment Setup | Yes | We used cross-validation with grid-search to tune all hyperparameters. For the latent factor methods, we tuned the latent dimension K from {5, 10, 50, 100, 500, 1000}. For the methods relying on ℓ2 regularisation, we tuned all regularisation strengths from {10^-3, 10^-2, ..., 10^3}. (A grid-search sketch follows the table.) |
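The splitting protocol quoted in the Open Datasets row can be summarised in a few lines. Below is a minimal sketch, assuming interactions live in a pandas DataFrame with a `user` column; the function name `make_user_folds` and the data layout are hypothetical, since the paper does not release its splitting code.

```python
import numpy as np
import pandas as pd

def make_user_folds(interactions: pd.DataFrame, n_folds: int = 10,
                    test_frac: float = 0.1, seed: int = 0):
    """Yield (train, test) frames; each fold holds out a random 10% of users."""
    users = interactions["user"].unique()
    rng = np.random.default_rng(seed)
    for _ in range(n_folds):
        # Sample 10% of users for the test set; the rest form the training set.
        test_users = rng.choice(users, size=int(test_frac * len(users)),
                                replace=False)
        test_mask = interactions["user"].isin(test_users)
        yield interactions[~test_mask], interactions[test_mask]
```

Note that the Ebook splits in the paper are temporal rather than random; this sketch covers only the random user folds described for the other datasets.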
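The Software Dependencies row notes that the paper names randomized SVD without pinning an implementation. The fragment below is one hedged way to realise that step using scikit-learn's `randomized_svd` (our library choice, not the authors'); `X` stands in for a high-dimensional user-by-metadata matrix.

```python
import numpy as np
from sklearn.utils.extmath import randomized_svd

X = np.random.rand(1000, 5000)        # placeholder social-metadata matrix
U, S, Vt = randomized_svd(X, n_components=50, random_state=0)
X_low_rank = U @ np.diag(S) @ Vt      # rank-50 approximation of X
```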
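Finally, the Experiment Setup row quotes a grid search over the latent dimension K and the ℓ2 regularisation strengths. The sketch below reproduces those grids with a generic scikit-learn pipeline (TruncatedSVD plus Ridge) standing in for the paper's recommenders, on synthetic placeholder data; K values of 500 and 1000 are omitted only because the toy matrix is too small to support them.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
X = rng.random((200, 1000))   # placeholder high-dimensional features
y = rng.random(200)           # placeholder targets

pipe = Pipeline([("svd", TruncatedSVD()), ("ridge", Ridge())])
param_grid = {
    "svd__n_components": [5, 10, 50, 100],              # first entries of the paper's K grid
    "ridge__alpha": [10.0 ** p for p in range(-3, 4)],  # l2 strengths 1e-3 ... 1e3
}
search = GridSearchCV(pipe, param_grid, cv=5)           # cross-validated grid search
search.fit(X, y)
print(search.best_params_)
```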