User Modeling with Click Preference and Reading Satisfaction for News Recommendation

Authors: Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang

IJCAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments on real-world dataset show our method can effectively boost the performance of user modeling for news recommendation."
Researcher Affiliation | Collaboration | Chuhan Wu¹, Fangzhao Wu², Tao Qi¹ and Yongfeng Huang¹; ¹Department of Electronic Engineering & BNRist, Tsinghua University, Beijing 100084, China; ²Microsoft Research Asia, Beijing 100080, China
Pseudocode | No | The paper describes the approach textually and through diagrams (Figure 2, Figure 3), but does not contain structured pseudocode or an algorithm block.
Open Source Code | Yes | "The source codes of our CPRS method are available at https://github.com/wuch15/IJCAI2020-CPRS."
Open Datasets | No | "Since there is no publicly available news recommendation dataset that contains the full dwell time information of all news logs, we constructed one by collecting the 500,000 news impression logs from Microsoft News during one month from 10/12/2019 to 11/13/2019."
Dataset Splits | Yes | "The logs in the last week were used for test, and the rest were used for training. We further randomly sampled 10% of the training data for validation." (See the split sketch after the table.)
Hardware Specification | No | The paper does not specify the hardware used for running the experiments (e.g., GPU/CPU models, memory specifications).
Software Dependencies | No | The paper mentions using GloVe for word embeddings and Adam as the optimizer, but does not specify version numbers or any other software dependencies with version information.
Experiment Setup | Yes | "In our experiments, word embeddings were 300-dimensional, and the personalized read speed embedding was 200-dimensional. Following Wu et al. [2019c], the self-attention networks had 16 heads, and each head was 16-dimensional. The attention query was 200-dimensional. The negative sampling ratio N was 4. The intensity of dropout was 20%. The loss coefficient λ was 0.4. Adam [Kingma and Ba, 2015] was used as the optimizer. The batch size was set to 32." (See the configuration sketch after the table.)
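The reported split is purely chronological plus a random validation sample, so it can be reproduced from timestamps alone. Below is a minimal sketch of such a split in Python; the `chronological_split` name and the assumption that each log is a dict with a datetime `time` field are hypothetical choices for illustration, not code from the CPRS repository.

```python
import random

def chronological_split(logs, test_start, val_fraction=0.1, seed=42):
    """Chronological split as described in the paper: logs from the last
    week form the test set, the rest form the training set, and 10% of
    the training logs are randomly sampled for validation."""
    # `logs` is assumed to be a list of dicts with a datetime `time` field;
    # this data layout is a hypothetical choice for the sketch.
    train = [log for log in logs if log["time"] < test_start]
    test = [log for log in logs if log["time"] >= test_start]

    rng = random.Random(seed)          # fixed seed for a reproducible sample
    rng.shuffle(train)
    n_val = int(len(train) * val_fraction)
    return train[n_val:], train[:n_val], test  # train, validation, test
```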
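For convenience, the reported hyperparameters are collected in one runnable sketch below. Only the numeric values come from the paper; the dictionary keys, the Keras optimizer call, and the additive form of the combined loss (click loss plus λ times satisfaction loss) are assumptions of this sketch, not the authors' implementation.

```python
import tensorflow as tf

# Numeric values as reported in the paper; key names are this sketch's own.
CONFIG = {
    "word_embedding_dim": 300,        # 300-dimensional GloVe word embeddings
    "read_speed_embedding_dim": 200,  # personalized read speed embedding
    "self_attention_heads": 16,       # following Wu et al. [2019c]
    "self_attention_head_dim": 16,    # each of the 16 heads is 16-dimensional
    "attention_query_dim": 200,
    "negative_sampling_ratio": 4,     # N = 4 negatives per clicked news
    "dropout_rate": 0.2,              # "intensity of dropout was 20%"
    "loss_lambda": 0.4,               # loss coefficient lambda
    "batch_size": 32,
}

# Adam [Kingma and Ba, 2015]; the learning rate is not reported in the paper.
optimizer = tf.keras.optimizers.Adam()

def total_loss(click_loss, satisfaction_loss, lam=CONFIG["loss_lambda"]):
    # Hypothetical combination: the paper only reports lambda = 0.4, and this
    # additive weighting of the two objectives is an assumption of the sketch.
    return click_loss + lam * satisfaction_loss
```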