Meta-Learning for Online Update of Recommender Systems
Authors: Minseok Kim, Hwanjun Song, Yooju Shin, Dongmin Park, Kijung Shin, Jae-Gil Lee
AAAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Theoretical analysis and extensive evaluation on three real-world online recommender datasets validate the effectiveness of MeLON. |
| Researcher Affiliation | Collaboration | KAIST, NAVER AI Lab; {minseokkim, yooju.shin, dongminpark, kijungs, jaegil}@kaist.ac.kr, hwanjun.song@navercorp.com |
| Pseudocode | Yes | Algorithm 1: Online Training via MeLON |
| Open Source Code | Yes | Our source code is available at https://github.com/kaist-dmlab/MeLON. |
| Open Datasets | Yes | We used three real-world online recommendation benchmark datasets: Adressa (Gulla et al. 2017), Amazon (Ni, Li, and McAuley 2019), and Yelp, as summarized in Table 1. |
| Dataset Splits | Yes | For prequential evaluation on online recommendation scenarios, we follow a commonly-used approach (He et al. 2016); we sort the interactions in the dataset in chronological order and divide them into three parts: offline pre-training data, online validation data, and online test data. Online validation data is exploited to search the hyperparameter settings of the recommenders and update strategies, and takes up 10% of the test data. (A split sketch follows the table.) |
| Hardware Specification | Yes | Our implementation is written in PyTorch, and the experiments were conducted on Nvidia Titan RTX and Intel i9-9900KS. |
| Software Dependencies | No | The paper mentions "PyTorch" but does not specify a version number for it or any other software dependency. |
| Experiment Setup | Yes | All the experiments are performed with a batch size of 256 and trained for 100 epochs. ... For the graph attention in the first component of MeLON, we randomly sample 10 neighbors per target user or item. Besides, for the MLP which learns the parameter roles, the number of hidden layers (L) is set to 2. To optimize a recommender under the default and sample reweighting strategies, we use Adam (Kingma and Ba 2015) with a learning rate η = 0.001 and a weight decay of 0.001. (A training-loop sketch follows the table.) |
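
The chronological prequential split reported under "Dataset Splits" can be made concrete with a short sketch. This is not the authors' code: the `timestamp` field, the offline pre-training ratio, and the reading that the validation portion is sized at 10% of the test portion are all assumptions for illustration; the paper only states the three-way chronological split and the 10% relation.

```python
# Minimal sketch of the chronological prequential split (not the authors' code).
# Assumptions: each interaction carries a "timestamp" field, and the offline
# pre-training ratio is a free parameter the paper does not state.
from typing import List, Tuple


def prequential_split(
    interactions: List[dict],
    pretrain_ratio: float = 0.5,       # assumed; not stated in the paper
    valid_to_test_ratio: float = 0.1,  # "takes up 10% of the test data"
) -> Tuple[List[dict], List[dict], List[dict]]:
    """Sort interactions chronologically, then cut into offline pre-training,
    online validation, and online test portions."""
    ordered = sorted(interactions, key=lambda x: x["timestamp"])
    n_pretrain = int(len(ordered) * pretrain_ratio)
    pretrain, rest = ordered[:n_pretrain], ordered[n_pretrain:]
    # Size validation so that len(valid) is roughly 10% of len(test).
    n_valid = int(len(rest) * valid_to_test_ratio / (1.0 + valid_to_test_ratio))
    return pretrain, rest[:n_valid], rest[n_valid:]
```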
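
The hyperparameters reported under "Experiment Setup" translate directly into a PyTorch training configuration. The sketch below wires up Adam with a learning rate of 0.001 and weight decay of 0.001, a batch size of 256, and 100 epochs, as reported; the model, loss, and data are placeholders, not MeLON itself.

```python
# PyTorch sketch of the reported training configuration (not MeLON itself).
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset

# Placeholder recommender: a two-hidden-layer MLP, echoing the paper's choice
# of L = 2 hidden layers for its parameter-role MLP. The actual MeLON
# architecture differs; this stands in only to make the loop runnable.
model = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 1),
)

# Adam with lr = 0.001 and weight decay = 0.001, as reported in the paper.
optimizer = optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Dummy tensors standing in for user-item interaction features and labels.
features = torch.randn(2560, 64)
labels = torch.randint(0, 2, (2560, 1)).float()
loader = DataLoader(TensorDataset(features, labels), batch_size=256)

for epoch in range(100):  # "trained for 100 epochs"
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
```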