A Batch Learning Framework for Scalable Personalized Ranking
Authors: Kuan Liu, Prem Natarajan
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct empirical evaluations on three item recommendation tasks, and our method shows a consistent accuracy improvement over current state-of-the-art methods. |
| Researcher Affiliation | Academia | Kuan Liu, Prem Natarajan, Information Sciences Institute & Department of Computer Science, University of Southern California |
| Pseudocode | Yes | Algorithm 1: Input: training data S; mini-batch size m; sample rate q; a learning rate η. Output: the model parameters f. Initialize parameters of model f randomly; while Objective (9) is not converged do: sample a mini-batch of observations {(x, y)_i}_{i=1}^{m}; sample item subset Z from Y with q = \|Z\|/\|Y\|; compute approximated ranks by (8); update model parameters f ← f − η ∂ℓ/∂f based on (10); end. (A hedged Python sketch of this loop appears after the table.) |
| Open Source Code | No | BPR and WARP are implemented by LIGHTFM (Kula 2015). We implemented the other algorithms. |
| Open Datasets | Yes | MovieLens-20m: the dataset has anonymous ratings made by MovieLens users (www.movielens.org). We transform the data into binary indicating whether a user rated a movie above 4.0. We discard users with fewer than 10 movie ratings and use 70%/30% train/test splitting. Attributes include movie genres and movie title text. (See the preprocessing sketch after the table.) |
| Dataset Splits | Yes | Early stopping is used on a development dataset split from training for all models. |
| Hardware Specification | Yes | Batch-based approaches are implemented based on TensorFlow 1.2 on a single GPU (NVIDIA Tesla P100). LIGHTFM runs on Cython with a 5-core CPU (Intel Xeon 3.30GHz). |
| Software Dependencies | Yes | Batch-based approaches are implemented based on TensorFlow 1.2 on a single GPU (NVIDIA Tesla P100). LIGHTFM runs on Cython with a 5-core CPU (Intel Xeon 3.30GHz). |
| Experiment Setup | Yes | Hyper-parameter model size is tuned in {10, 16, 32, 48, 64}; learning rate is tuned in {0.5, 1, 5, 10}; when applicable, dropout rate is 0.5. |
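
For illustration, below is a minimal NumPy sketch of the batch training loop quoted in the Pseudocode row. It assumes a simple matrix-factorization scorer and a WARP-style rank-weighted hinge update as a stand-in for the paper's exact objectives (8)-(10), which are not reproduced here; the sampled-rank estimate scales violations found in the sampled subset Z by 1/q.

```python
import numpy as np

# Toy matrix-factorization scorer; dimensions and the loss below are illustrative
# assumptions, not the paper's exact objectives (8)-(10).
n_users, n_items, dim = 1000, 5000, 32
rng = np.random.default_rng(0)
U = 0.01 * rng.standard_normal((n_users, dim))   # user embeddings
V = 0.01 * rng.standard_normal((n_items, dim))   # item embeddings

def train_step(batch, q=0.05, eta=0.05, margin=1.0):
    """One mini-batch step: estimate each positive item's rank from a sampled
    item subset Z with q = |Z|/|Y|, then apply a rank-weighted hinge update
    (a WARP-style stand-in for the paper's update rule)."""
    Z = rng.choice(n_items, size=max(1, int(q * n_items)), replace=False)
    for u, pos in batch:
        u_vec = U[u].copy()
        scores_z = V[Z] @ u_vec                # scores of the sampled items
        pos_score = V[pos] @ u_vec
        violations = scores_z > pos_score - margin
        if not violations.any():
            continue
        rank_hat = violations.sum() / q        # scale sampled violations by 1/q
        weight = np.log1p(rank_hat)            # rank-to-weight mapping (assumption)
        neg = Z[int(np.argmax(scores_z))]      # hardest sampled violator
        # Hinge loss max(0, margin - s(u, pos) + s(u, neg)): gradient steps below.
        U[u]   -= eta * weight * (V[neg] - V[pos])
        V[pos] += eta * weight * u_vec
        V[neg] -= eta * weight * u_vec

# Usage: one step on a toy batch of (user, positive-item) index pairs.
batch = [(int(rng.integers(n_users)), int(rng.integers(n_items))) for _ in range(64)]
train_step(batch)
```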
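Likewise, a minimal pandas sketch of the MovieLens-20m preprocessing described in the Open Datasets row, assuming the standard ratings.csv layout (userId, movieId, rating columns) and a per-user random 70%/30% split; whether the split is per user or global, and whether the 10-rating cutoff counts all ratings or only positives, are assumptions not stated in the quoted text.

```python
import pandas as pd

# Assumed path and column names for the MovieLens-20m release (www.movielens.org).
ratings = pd.read_csv("ml-20m/ratings.csv")

# Binarize: keep interactions rated above 4.0 as positives.
positives = ratings[ratings["rating"] > 4.0]

# Discard users with fewer than 10 (positive) movie ratings.
counts = positives.groupby("userId")["movieId"].transform("count")
positives = positives[counts >= 10]

# Per-user random 70%/30% train/test split (an assumption; the split key is not stated).
def split_user(df, frac=0.7, seed=0):
    train = df.sample(frac=frac, random_state=seed)
    return train, df.drop(train.index)

splits = [split_user(g) for _, g in positives.groupby("userId")]
train = pd.concat(s[0] for s in splits)
test = pd.concat(s[1] for s in splits)
```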