Influential Exemplar Replay for Incremental Learning in Recommender Systems

Authors: Xinni Zhang, Yankai Chen, Chenhao Ma, Yixiang Fang, Irwin King

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on four prototypical backbone models, two classic recommendation tasks, and four widely used benchmarks consistently demonstrate the effectiveness of our method as well as its compatibility for extending to several incremental recommender models.
Researcher Affiliation | Academia | Xinni Zhang¹, Yankai Chen¹, Chenhao Ma², Yixiang Fang², Irwin King¹ (¹The Chinese University of Hong Kong; ²The Chinese University of Hong Kong, Shenzhen)
Pseudocode | Yes | Algorithm 1: Working Procedure of INFERONCE
Open Source Code | No | The paper neither states that source code for the described methodology is released nor includes a direct link to a code repository.
Open Datasets | Yes | We incorporate four real-world benchmarks that vary in size, domain, sparsity, and duration, as reported in Table 1. Each dataset is partitioned chronologically into a 50% base segment and five consecutive 10% incremental segments. (Table 1 lists Lastfm-2k, TB2014, Gowalla, and Foursquare.)
Dataset Splits | Yes | The base segment is randomly divided into training, validation, and testing sets in a 6:2:2 ratio. (A partitioning sketch follows this table.)
Hardware Specification | No | The paper does not describe the specific hardware (e.g., GPU/CPU models, memory) used to run its experiments.
Software Dependencies | No | The paper does not list software dependencies with version numbers (e.g., programming-language, library, or solver versions).
Experiment Setup | Yes | Here η denotes the learning rate. This significantly reduces the computation cost compared to Eqn. (7) while providing flexibility to tune the approximation rate via the hyper-parameter λ. Lastly, we vary the reservoir size by adjusting the replay ratio, i.e., K/|D|, to investigate INFERONCE's performance. For stable reproducibility, we conduct five-fold cross-validation. The holistic runtime costs of all models on the largest dataset, Foursquare (including reservoir construction, incremental model training with early stopping, and evaluation), are reported. (A reservoir-construction sketch follows this table.)
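
The data preparation described in the Open Datasets and Dataset Splits rows can be summarized in a short sketch. This is a minimal illustration, not the authors' code: it assumes interactions are stored as a timestamp-sorted list, and the function names chronological_partition and split_base are hypothetical.

    import numpy as np

    def chronological_partition(interactions, base_frac=0.5, n_increments=5):
        """Split a timestamp-sorted interaction list into a 50% base
        segment and five consecutive 10% incremental segments.
        (Any remainder after integer division is dropped in this sketch.)"""
        n = len(interactions)
        base_end = int(n * base_frac)
        base = interactions[:base_end]
        inc_size = (n - base_end) // n_increments
        increments = [
            interactions[base_end + i * inc_size : base_end + (i + 1) * inc_size]
            for i in range(n_increments)
        ]
        return base, increments

    def split_base(base, seed=0):
        """Randomly divide the base segment into training, validation,
        and testing sets in a 6:2:2 ratio, as stated in the paper."""
        rng = np.random.default_rng(seed)
        idx = rng.permutation(len(base))
        n_train = int(0.6 * len(base))
        n_valid = int(0.2 * len(base))
        train = [base[i] for i in idx[:n_train]]
        valid = [base[i] for i in idx[n_train : n_train + n_valid]]
        test = [base[i] for i in idx[n_train + n_valid :]]
        return train, valid, test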
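
The Experiment Setup row fixes the reservoir size through the replay ratio K/|D|, i.e., the fraction of the accumulated history kept for replay. The sketch below only illustrates that sizing rule: it samples exemplars uniformly as a placeholder, whereas INFER/INFERONCE selects influential exemplars by an influence criterion that this summary does not detail; build_reservoir is a hypothetical name.

    import random

    def build_reservoir(history, replay_ratio, seed=0):
        """Construct a replay reservoir whose size K is a fixed fraction
        (the replay ratio K/|D|) of the accumulated interaction history.

        NOTE: uniform sampling is a placeholder; the paper's method ranks
        exemplars by estimated influence, which is not implemented here."""
        k = max(1, int(replay_ratio * len(history)))
        rng = random.Random(seed)
        return rng.sample(history, k)

    # Example: keep 10% of the history for replay alongside a new segment.
    history = list(range(10_000))  # stand-in for past interactions
    reservoir = build_reservoir(history, replay_ratio=0.1)
    assert len(reservoir) == 1_000

Varying replay_ratio as in the paper's study then amounts to rebuilding the reservoir at different values of K/|D| and retraining the incremental model on each resulting reservoir.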