Sequential Recommendation with Relation-Aware Kernelized Self-Attention

Authors: Mingi Ji, Weonyoung Joo, Kyungwoo Song, Yoon-Yeong Kim, Il-Chul Moon

AAAI 2020, pp. 4304-4311 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We experimented RKSA over the benchmark datasets, and RKSA shows significant improvements compared to the recent baseline models.
Researcher Affiliation | Academia | Mingi Ji, Weonyoung Joo, Kyungwoo Song, Yoon-Yeong Kim, Il-Chul Moon, Korea Advanced Institute of Science and Technology (KAIST), Korea, {qwertgfdcvb, es345, gtshs2, yoonyeong.kim, icmoon}@kaist.ac.kr
Pseudocode | No | The paper describes its methodology in detail but does not include pseudocode or clearly labeled algorithm blocks.
Open Source Code | No | The paper mentions using 'official codes written by the corresponding authors' for baseline models but does not state that the code for RKSA, the proposed method, is publicly available.
Open Datasets | Yes | We evaluate our model on five real world datasets: Amazon (Beauty, Games) (He and McAuley 2016; McAuley et al. 2015), CiteULike, Steam, and MovieLens. We follow the same preprocessing procedure on Beauty, Games, and Steam from (Kang and McAuley 2018). For preprocessing CiteULike and MovieLens, we follow the preprocessing procedure from (Song et al. 2019).
Dataset Splits | Yes | We split all datasets for training, validation, and testing following the procedure of (Kang and McAuley 2018).
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU or CPU models.
Software Dependencies | No | The paper mentions 'Adam' as the optimizer but does not specify other software dependencies such as the programming language (e.g., Python), deep learning framework (e.g., PyTorch, TensorFlow), or specific library versions.
Experiment Setup | Yes | For fair comparisons, we apply the same setting of the batch size (128), the item embedding (64), the dropout rate (0.5), the learning rate (0.001), and the optimizer (Adam). We use the same setting of authors for other hyperparameters. For RKSA, we set the co-occurrence loss weight λr as 0.001. Furthermore, we use the learning rate decay and the early stopping based on the validation accuracy for all methods. We use the latest 50 actions of sequence for all datasets.
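The Dataset Splits row above only cites Kang and McAuley (2018), whose protocol is the leave-one-out split commonly used in sequential recommendation: each user's most recent action is held out for testing, the second most recent for validation, and the remainder used for training. The sketch below illustrates that protocol under those assumptions; the function name and dictionary-based data layout are placeholders, not the authors' code.

```python
# Sketch of the leave-one-out split cited from Kang and McAuley (2018).
# Assumes each user's interactions are already sorted chronologically.
from typing import Dict, List, Tuple


def leave_one_out_split(
    user_sequences: Dict[int, List[int]]
) -> Tuple[Dict[int, List[int]], Dict[int, int], Dict[int, int]]:
    """Split each user's chronological item sequence into train/valid/test."""
    train, valid, test = {}, {}, {}
    for user, items in user_sequences.items():
        if len(items) < 3:
            # Too few interactions: keep everything for training.
            train[user] = items
            continue
        train[user] = items[:-2]   # all but the last two actions
        valid[user] = items[-2]    # second most recent action
        test[user] = items[-1]     # most recent action
    return train, valid, test
```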
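Based only on the hyperparameters quoted in the Experiment Setup row, a minimal configuration sketch follows. The names (RKSAConfig, total_loss) are hypothetical, and the additive weighting of the co-occurrence loss by λr is an assumption drawn from the quoted description; the authors' implementation is not publicly available.

```python
# Hypothetical hyperparameter configuration reconstructed from the
# "Experiment Setup" row; all names here are illustrative placeholders.
from dataclasses import dataclass


@dataclass
class RKSAConfig:
    batch_size: int = 128         # shared across all methods for fairness
    item_embedding_dim: int = 64
    dropout_rate: float = 0.5
    learning_rate: float = 1e-3   # Adam optimizer, with decay and early stopping
    lambda_r: float = 1e-3        # co-occurrence loss weight (λr in the paper)
    max_seq_len: int = 50         # latest 50 actions per user sequence


def total_loss(rec_loss: float, cooccurrence_loss: float, cfg: RKSAConfig) -> float:
    """Combine the recommendation loss with the weighted co-occurrence loss
    (assumed to be an additive penalty scaled by lambda_r)."""
    return rec_loss + cfg.lambda_r * cooccurrence_loss


if __name__ == "__main__":
    cfg = RKSAConfig()
    # Example: weight a dummy co-occurrence loss value.
    print(total_loss(rec_loss=0.8, cooccurrence_loss=2.0, cfg=cfg))
```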