LANCER: A Lifetime-Aware News Recommender System

Authors: Hong-Kyun Bae, Jeewon Ahn, Dongwon Lee, Sang-Wook Kim

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Using real-world news datasets (e.g., Adressa and MIND), we successfully demonstrate that state-of-the-art news recommendation models can benefit significantly from integrating the notion of lifetime and LANCER, with increases of up to about 40% in recommendation accuracy.
Researcher Affiliation | Academia | Hong-Kyun Bae (1), Jeewon Ahn (1), Dongwon Lee (2), Sang-Wook Kim* (1). (1) Department of Computer Science, Hanyang University, South Korea; (2) College of Information Sciences and Technology, The Pennsylvania State University, USA
Pseudocode | No | The paper describes the proposed approach and its components using natural language and mathematical equations, but does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any explicit statement about the release of source code for the described methodology, nor does it provide any link to a code repository.
Open Datasets | Yes | We conduct experiments on two popular real-world datasets: MIND (Wu et al. 2020) and Adressa (Gulla et al. 2017), as shown in Table 1.
Dataset Splits | No | For MIND, we randomly sampled 200K users' click logs and then divided them into training and test sets, following the previous studies (Wu et al. 2019a; Qi et al. 2021b; Wu, Wu, and Huang 2021) that adopted MIND for their evaluation (i.e., 6 days and 1 day for the training and test sets, respectively). For Adressa, which contains the click logs from a total of 5 weeks, we used the 4th and 5th weeks as the training and test sets, respectively.
Hardware Specification | No | The paper does not provide any specific details regarding the hardware (e.g., GPU/CPU models, memory) used for conducting the experiments.
Software Dependencies | No | The paper mentions using existing DL-based models such as NRMS, LSTUR, NAML, and CNE-SUE, along with components such as attention networks, CNNs, and LSTMs, but it does not specify version numbers for these or any other software dependencies.
Experiment Setup | Yes | While training news recommendation models, we employed 8 as the value of K in Eq. 4. Then, to evaluate the accuracy of news recommendation, we constructed test sets to have 20 negative news for a user's single positive news during the test period... Here, we set α to the value showing the best recommendation accuracy for each model, respectively... In Figure 8, the x-axis denotes α (×10) and the y-axis indicates the accuracy under the corresponding metric. Regardless of the metrics, the results with α=0.4, α=0.1, and α=0.2 show the best performances for NRMS, LSTUR, and NAML, respectively.
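Two pieces of the quoted protocol are concrete enough to sketch: the chronological train/test split (6 days / 1 day for MIND) and the ranking evaluation that scores each user's single positive news against 20 sampled negatives. A minimal Python sketch, assuming `(user, news, timestamp)` click tuples and a `score_fn(user, news) -> float` recommender interface — both are illustrative assumptions, not interfaces from the paper:

```python
import random
from datetime import datetime, timedelta

def chronological_split(click_logs, train_days=6, test_days=1):
    """Split (user, news, timestamp) click logs in time order, mirroring
    the MIND protocol quoted above (6 days train / 1 day test).
    The tuple layout is an assumption for illustration."""
    start = min(ts for _, _, ts in click_logs)
    cutoff = start + timedelta(days=train_days)
    end = cutoff + timedelta(days=test_days)
    train = [c for c in click_logs if c[2] < cutoff]
    test = [c for c in click_logs if cutoff <= c[2] < end]
    return train, test

def mean_reciprocal_rank(score_fn, test_clicks, candidate_pool, n_neg=20, seed=0):
    """Rank each user's single positive news against n_neg sampled
    negatives (the 1-positive / 20-negative protocol in the quote) and
    return the mean reciprocal rank. score_fn stands in for any trained
    recommender; its signature is assumed, not taken from the paper."""
    rng = random.Random(seed)
    rr = 0.0
    for user, pos, _ in test_clicks:
        negatives = rng.sample([n for n in candidate_pool if n != pos], n_neg)
        ranked = sorted(negatives + [pos],
                        key=lambda n: score_fn(user, n), reverse=True)
        rr += 1.0 / (1 + ranked.index(pos))
    return rr / len(test_clicks)
```

The same loop extends to the paper's other metrics (e.g., nDCG or hit ratio) by replacing the reciprocal-rank term with the corresponding rank-based score.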