UNBERT: User-News Matching BERT for News Recommendation

Authors: Qi Zhang, Jingjie Li, Qinglin Jia, Chuyuan Wang, Jieming Zhu, Zhaowei Wang, Xiuqiang He

IJCAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on the Microsoft News Dataset (MIND) demonstrate that our approach consistently outperforms the state-of-the-art methods."
Researcher Affiliation | Industry | Huawei Noah’s Ark Lab; {zhangqi193, lijingjie1, jiaqinglin2, wangchuyuan, jamie.zhu, wangzhaowei3, hexiuqiang1}@huawei.com
Pseudocode | No | The paper describes the model architecture and its components in detail (e.g., Embedding Layer, Word-Level Module, News-Level Module, Click Predictor) and provides a diagram in Figure 3, but it does not include structured pseudocode or an explicitly labeled algorithm block. (An illustrative sketch of the described architecture follows the table.)
Open Source Code | No | The paper contains no unambiguous statement that the authors are releasing the code for this work, and it provides no direct link to a source-code repository for their implementation.
Open Datasets | Yes | "We conduct experiments on a real-world news recommendation dataset MIND [Wu et al., 2020] collected from MSN News logs." The dataset is publicly available at https://msnews.github.io. (A loading sketch follows the table.)
Dataset Splits | Yes | "The detailed statistics of the datasets are shown in Table 1." Table 1 lists Train/Dev/Test splits for both MIND-small and MIND-large. "All the hyper-parameters are tuned on the validation set."
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory amounts) used for running the experiments.
Software Dependencies | No | The paper mentions using 'bert-base-uncased' as the pre-trained model and Adam for optimization, but it does not specify version numbers for these or for any other software components or libraries. (A setup sketch follows the table.)
Experiment Setup | Yes | "The batch size is set to 128, the learning rate is set to 2e-5, and 2 epochs are trained."
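
For readers who want a concrete picture of the architecture summarized in the Pseudocode row, here is a minimal, illustrative sketch of a UNBERT-style user-news matching model. It is not the authors' implementation: a small nn.TransformerEncoder stands in for the pre-trained BERT the paper actually uses, and all dimensions, pooling choices, and hyperparameters below are assumptions made for illustration.

```python
# Illustrative sketch only, NOT the authors' code. A small Transformer
# encoder replaces pre-trained BERT so the example is self-contained.
import torch
import torch.nn as nn


class UNBERTSketch(nn.Module):
    def __init__(self, vocab_size=30522, dim=128, heads=4, layers=2, max_news=51):
        super().__init__()
        # Embedding layer: token embeddings plus learned positions (sizes assumed).
        self.tok_emb = nn.Embedding(vocab_size, dim, padding_idx=0)
        self.word_pos = nn.Embedding(1024, dim)
        self.news_pos = nn.Embedding(max_news, dim)

        def encoder():
            layer = nn.TransformerEncoderLayer(
                d_model=dim, nhead=heads, dim_feedforward=4 * dim, batch_first=True
            )
            return nn.TransformerEncoder(layer, num_layers=layers)

        # Word-level module: contextualizes the concatenated candidate+history tokens.
        self.word_level = encoder()
        # News-level module: contextualizes one pooled vector per news article.
        self.news_level = encoder()
        # Click predictor: scores the fused word- and news-level signals.
        self.predictor = nn.Linear(2 * dim, 1)

    def forward(self, cand_tokens, hist_tokens):
        # cand_tokens: (B, Lc) token ids of the candidate news title
        # hist_tokens: (B, H, Lh) token ids of the user's H clicked news titles
        B, H, Lh = hist_tokens.shape
        tokens = torch.cat([cand_tokens, hist_tokens.reshape(B, H * Lh)], dim=1)
        pos = torch.arange(tokens.size(1), device=tokens.device)
        x = self.word_level(self.tok_emb(tokens) + self.word_pos(pos))
        word_vec = x[:, 0]  # first token state as the word-level summary

        # Mean-pool token states back into per-news vectors: candidate + H history.
        cand_vec = x[:, : cand_tokens.size(1)].mean(dim=1, keepdim=True)
        hist_vec = x[:, cand_tokens.size(1):].reshape(B, H, Lh, -1).mean(dim=2)
        news_seq = torch.cat([cand_vec, hist_vec], dim=1)  # (B, 1 + H, dim)
        news_seq = news_seq + self.news_pos(torch.arange(1 + H, device=tokens.device))
        news_vec = self.news_level(news_seq)[:, 0]

        # Click probability for the (user, candidate news) pair.
        return torch.sigmoid(self.predictor(torch.cat([word_vec, news_vec], dim=1)))


# Toy usage: batch of 2 users, 4 history items, titles truncated to 16 tokens.
model = UNBERTSketch()
score = model(torch.randint(1, 30522, (2, 16)), torch.randint(1, 30522, (2, 4, 16)))
print(score.shape)  # torch.Size([2, 1])
```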
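
The MIND link in the Open Datasets row resolves to the public dataset site, which distributes each split as tab-separated news.tsv and behaviors.tsv files. Below is a minimal pandas loading sketch; the column names follow the public MIND documentation, and the local paths are placeholders.

```python
# Minimal sketch for loading one MIND split with pandas; the column layout
# follows the public MIND documentation, and the paths are placeholders.
import pandas as pd

news = pd.read_csv(
    "MINDsmall_train/news.tsv", sep="\t", header=None,
    names=["news_id", "category", "subcategory", "title", "abstract",
           "url", "title_entities", "abstract_entities"],
)

behaviors = pd.read_csv(
    "MINDsmall_train/behaviors.tsv", sep="\t", header=None,
    names=["impression_id", "user_id", "time", "history", "impressions"],
)

# "history" is a space-separated list of clicked news ids; "impressions" is a
# space-separated list of "<news_id>-<label>" pairs (label 1 = clicked).
behaviors["history"] = behaviors["history"].fillna("").str.split()
behaviors["impressions"] = behaviors["impressions"].str.split()
print(news.head(2), behaviors.head(2), sep="\n")
```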
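
Finally, the values the paper does report (bert-base-uncased, Adam, batch size 128, learning rate 2e-5, 2 epochs) are enough to pin down a skeleton of the training setup. The sketch below uses HuggingFace transformers and PyTorch as a plausible, but assumed, software stack; the sequence length, loss, and data pipeline are likewise assumptions, since the paper pins no versions or further details.

```python
# Sketch of the reported setup: bert-base-uncased, Adam, batch size 128,
# lr 2e-5, 2 epochs. The library stack itself is an assumption.
import torch
from torch.optim import Adam
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")

optimizer = Adam(bert.parameters(), lr=2e-5)  # reported optimizer and lr
batch_size, epochs = 128, 2                   # reported values

# Tiny smoke test: encode a candidate/history text pair the way a
# BERT-style matching model consumes it (max_length is assumed).
inputs = tokenizer(
    "breaking news headline", "previously clicked headline",
    return_tensors="pt", padding="max_length", truncation=True, max_length=64,
)
cls_state = bert(**inputs).last_hidden_state[:, 0]
print(cls_state.shape)  # torch.Size([1, 768])
```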