Structure Aware Incremental Learning with Personalized Imitation Weights for Recommender Systems

Authors: Yuening Wang, Yingxue Zhang, Antonios Valkanas, Ruiming Tang, Chen Ma, Jianye Hao, Mark Coates

AAAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We demonstrate the effectiveness of learning imitation weights via a comparison on five diverse datasets for three state-of-the-art structure distillation based recommender systems. The performance shows consistent improvement over competitive incremental learning techniques." And, from the Experiments section: "Datasets: We use a diverse set of datasets consisting of real-world user-item interactions. As shown in Table 1, the datasets vary in the number of edges and number of user and item nodes by up to two orders of magnitude, demonstrating our approach's scalability."
Researcher Affiliation | Collaboration | 1 Huawei Noah's Ark Lab; 2 McGill University; 3 City University of Hong Kong; 4 Tianjin University. Emails: yuening.wang@huawei.com, yingxue.zhang@huawei.com, antonios.valkanas@mail.mcgill.ca, tangruiming@huawei.com, chenma@cityu.edu.hk, haojianye@huawei.com, mark.coates@mcgill.ca
Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks.
Open Source Code | No | The paper does not contain any explicit statement about providing open-source code for the described methodology, nor does it include a link to a code repository.
Open Datasets | Yes | "The 5 mainstream, publicly available datasets we use are: Gowalla, Yelp, Taobao14, Taobao15 and Netflix."
Dataset Splits | No | The paper describes testing on subsequent incremental blocks ("tested on block t+1") but does not provide specific training, validation, and test splits (e.g., percentages or sample counts) for the datasets used (see the protocol sketch below the table).
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory specifications, or cloud computing instance types) used for running its experiments.
Software Dependencies | No | The paper does not list specific version numbers for any key software components, libraries, or solvers used in the experiments.
Experiment Setup | No | The paper mentions balancing coefficients λ1 and λ2 in the overall training objective but does not provide specific numerical values for these or for other hyperparameters such as learning rate, batch size, number of epochs, or optimizer settings (see the objective sketch below the table).
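
For context on the Dataset Splits row: the evaluation protocol in this line of work partitions time-ordered interactions into consecutive blocks, updates the model on block t, and tests on block t+1. Below is a minimal sketch of that protocol, assuming interactions are (user, item, timestamp) tuples; the function names and the fit_incremental/evaluate model interface are hypothetical, since the paper publishes no code.

```python
def split_into_blocks(interactions, num_blocks):
    """Partition time-ordered (user, item, timestamp) tuples into
    equally sized consecutive blocks."""
    ordered = sorted(interactions, key=lambda x: x[2])  # sort by timestamp
    size = len(ordered) // num_blocks
    return [ordered[i * size:(i + 1) * size] for i in range(num_blocks)]

def incremental_evaluation(model, blocks):
    """Update the model on block t, then test on block t+1,
    for every consecutive pair of blocks."""
    scores = []
    for t in range(len(blocks) - 1):
        model.fit_incremental(blocks[t])          # hypothetical update step
        scores.append(model.evaluate(blocks[t + 1]))
    return scores
```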
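The Experiment Setup row notes that the overall objective balances terms with coefficients λ1 and λ2, and the Research Type row highlights learned per-user imitation weights. As a rough illustration only, a plausible shape for such an objective is a base recommendation loss plus imitation-weighted distillation terms. The MSE distillation form, the tensor shapes, and the default coefficients below are assumptions; the paper's exact loss terms and hyperparameter values are not reported here.

```python
import torch

def weighted_distillation(new_emb, old_emb, imitation_weights):
    """Per-user distillation scaled by a learned imitation weight:
    users with a higher weight are pulled more strongly toward the
    old (teacher) model. The MSE form is an illustrative assumption.
    new_emb, old_emb: (num_users, dim); imitation_weights: (num_users,)."""
    per_user = ((new_emb - old_emb) ** 2).mean(dim=1)
    return (imitation_weights * per_user).mean()

def total_loss(rec_loss, user_kd, item_kd, lambda1=1.0, lambda2=1.0):
    """Overall objective with the two balancing coefficients the paper
    mentions; the default values here are placeholders, not reported ones."""
    return rec_loss + lambda1 * user_kd + lambda2 * item_kd
```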