Leaping through Time with Gradient-Based Adaptation for Recommendation
Authors: Nuttapong Chairatanakul, Hoang NT, Xin Liu, Tsuyoshi Murata (pp. 6141-6149)
AAAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experimental results show that LeapRec consistently outperforms the state-of-the-art methods on several datasets and recommendation metrics. Furthermore, we provide an empirical study of the interaction between GTL and OTL, showing the effects of long- and short-term modeling. |
| Researcher Affiliation | Academia | Nuttapong Chairatanakul (1,2), Hoang NT (1), Xin Liu (3,2,4), Tsuyoshi Murata (1,2); 1: Tokyo Institute of Technology, Tokyo, Japan; 2: RWBC-OIL, AIST, Tokyo, Japan; 3: AIRC, AIST, Tokyo, Japan; 4: DigiARC, AIST, Tokyo, Japan; nuttapong.c@net.c.titech.ac.jp, hoangnt@net.c.titech.ac.jp, xin.liu@aist.go.jp, murata@c.titech.ac.jp |
| Pseudocode | Yes | Algorithm 1: Meta-optimization for γ and ω using Leap (see the meta-adaptation sketch below the table) |
| Open Source Code | No | The paper does not contain any explicit statement or link indicating that the source code for the proposed methodology is publicly available. |
| Open Datasets | Yes | We used three publicly available datasets: Amazon (Ni, Li, and McAuley 2019), Goodreads (Wang and Caverlee 2019), and Yelp (https://www.yelp.com/dataset). User-item interactions in the Amazon dataset are product reviews. Our experiments used the preprocessed Amazon data by Wang et al. (2020), where data across categories are mixed. The Goodreads dataset is mined from the book-reading social network Goodreads. User-item interactions here are ratings and reviews of books. Yelp is a crowd-sourced business reviewing website. We used the latest officially provided data, updated in February 2021. |
| Dataset Splits | Yes | We divided each dataset into training, validation, and test sets based on the timestamps of interactions. We kept the interactions within six months after the cutting time as the validation set, and the rest after that as the test set. (See the temporal-split sketch below the table.) |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions using 'Adam' as an optimizer and 'GCN' for graph neural networks, but it does not specify version numbers for any software dependencies or libraries (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | For all baselines, we conducted a hyperparameter search on the dimensionality of embedding: {64, 128}, learning rate: {0.001, 0.0001}, and dropout: {0, 0.2, 0.5}. We also ran a hyperparameter search on the number of layers L: {1, 2, 3, 4} for GNN and {1, 2} for SA. We used Adam (Kingma and Ba 2014) as the optimizer for all models. For LeapRec, we used the same hyperparameter settings as SASRec for its SA, and for its GNN, we adopted a two-layer GCN (Kipf and Welling 2016) with the dropout rate set at 0.2. We performed a grid search on the meta learning rates, the number of update steps K, and the time granularity on Amazon, then used them on the other datasets. After the grid search, we set the learning rates β = η = 0.01. For the best performance to answer RQ1, we set K to 40 and the time granularity to one month, whereas for faster training to answer RQ2-4, we set K to 20 and the time granularity to two months. (A grid-search sketch follows below the table.) |
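The Pseudocode row refers to Algorithm 1, a meta-optimization of the long-term parameters γ and short-term parameters ω using Leap. The following is only a rough Python sketch of the general gradient-based meta-adaptation pattern, not a reproduction of the paper's algorithm: a copy of the shared model is adapted for K inner steps on one time period, and the shared parameters are then pulled toward the adapted weights (a Reptile-style simplification of the Leap update). The model interface, the per-period data loaders, and the loss function are assumptions; K, β, and η follow the values quoted in the Experiment Setup row.

```python
import copy
import torch

def meta_adapt(model, period_loaders, loss_fn, K=20, eta=0.01, beta=0.01):
    """Gradient-based meta-adaptation across time periods (illustrative only).

    For each time period, a copy of the shared model is adapted with K inner
    SGD steps (learning rate eta); the shared parameters are then pulled
    toward the adapted ones (meta learning rate beta). This is a
    Reptile-style simplification, not the exact Leap update of Algorithm 1.
    """
    for loader in period_loaders:            # one data loader per time period
        fast = copy.deepcopy(model)          # period-specific copy to adapt
        inner_opt = torch.optim.SGD(fast.parameters(), lr=eta)
        for _, (batch, target) in zip(range(K), loader):
            inner_opt.zero_grad()
            loss_fn(fast(batch), target).backward()
            inner_opt.step()
        # Meta-update: move the shared initialization toward the adapted weights.
        with torch.no_grad():
            for p, q in zip(model.parameters(), fast.parameters()):
                p.add_(beta * (q - p))
    return model
```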
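The temporal split described in the Dataset Splits row can be written as a short pandas sketch. The cutoff date, the DataFrame column names, and the toy interaction log are illustrative assumptions; only the split logic (training before the cutoff, six months of validation after it, the remainder as test) comes from the quoted text.

```python
import pandas as pd

def temporal_split(interactions: pd.DataFrame, cutoff: pd.Timestamp):
    """Split interactions by time: train before the cutoff, validate on the
    six months after it, and test on everything later than that."""
    ts = interactions["timestamp"]
    val_end = cutoff + pd.DateOffset(months=6)

    train = interactions[ts < cutoff]
    valid = interactions[(ts >= cutoff) & (ts < val_end)]
    test = interactions[ts >= val_end]
    return train, valid, test

# Hypothetical usage with a toy interaction log.
log = pd.DataFrame({
    "user": [1, 1, 2, 2, 3],
    "item": [10, 11, 10, 12, 13],
    "timestamp": pd.to_datetime(
        ["2019-01-05", "2019-08-20", "2020-02-01", "2020-05-15", "2020-09-30"]
    ),
})
train, valid, test = temporal_split(log, cutoff=pd.Timestamp("2019-12-01"))
```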
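The baseline tuning in the Experiment Setup row is an exhaustive grid search over embedding size, learning rate, dropout, and layer count. A minimal sketch, assuming a hypothetical `train_and_evaluate(config) -> float` callback that trains one model with Adam and returns its validation score; the search-space values are the ones quoted above.

```python
from itertools import product

# Search space quoted from the paper; the GNN layer grid is shown here.
grid = {
    "embed_dim": [64, 128],
    "lr": [0.001, 0.0001],
    "dropout": [0.0, 0.2, 0.5],
    "num_layers": [1, 2, 3, 4],  # the paper uses {1, 2} for the SA variant
}

def grid_search(train_and_evaluate):
    """Try every configuration in the grid and keep the best one.

    `train_and_evaluate` is a hypothetical callback that builds the model,
    trains it with Adam, and returns a validation metric to maximize.
    """
    best_score, best_config = float("-inf"), None
    for values in product(*grid.values()):
        config = dict(zip(grid.keys(), values))
        score = train_and_evaluate(config)
        if score > best_score:
            best_score, best_config = score, config
    return best_config, best_score
```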