Neural News Recommendation with Attentive Multi-View Learning
Authors: Chuhan Wu, Fangzhao Wu, Mingxiao An, Jianqiang Huang, Yongfeng Huang, Xing Xie
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on a real-world dataset show our approach can effectively improve the performance of news recommendation. |
| Researcher Affiliation | Collaboration | 1Department of Electronic Engineering, Tsinghua University, Beijing 100084, China 2Microsoft Research Asia, Beijing 100080, China 3University of Science and Technology of China, Hefei 230026, China 4Peking University, Beijing 100871, China |
| Pseudocode | No | No pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | No | The paper does not provide any concrete access information (e.g., specific repository link, explicit code release statement) for the source code of the described methodology. |
| Open Datasets | No | Our experiments were conducted on a real-world dataset, which was constructed by randomly sampling user logs from MSN News3 in one month, i.e., from December 13, 2018 to January 12, 2019. (Footnote 3: https://www.msn.com/en-us/news). This is an internally constructed dataset from private user logs, and no public access or download link for this specific dataset is provided. |
| Dataset Splits | Yes | We use the logs in the last week as the test set, and the rest logs are used for training. In addition, we randomly sampled 10% of training samples for validation. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions using 'pre-trained Glove embedding', 'Adam' as optimization algorithm, and 'dropout', but does not provide specific version numbers for these or other software components. |
| Experiment Setup | Yes | In our experiments, the dimensions of word embeddings and category embeddings were set to 300 and 100 respectively. ... The number of CNN filters was set to 400, and the window size was 3. The dimension of the dense layers in the category view was set to 400. The size of the attention queries was set to 200. The negative sampling ratio K was set to 4. We applied 20% dropout [Srivastava et al., 2014] to each layer in our approach to mitigate overfitting. Adam [Kingma and Ba, 2014] was used as the optimization algorithm. The batch size was set to 100. |
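The reported setup lends itself to a short sketch. The snippet below collects the hyperparameters quoted above into a config and illustrates the negative sampling the paper describes (K = 4 non-clicked news paired with each clicked news from the same impression). This is a minimal illustration, not the authors' code; all names (`CONFIG`, `build_training_samples`, the impression format) are assumptions introduced here.

```python
import random

# Hyperparameters as reported in the paper; the key names are illustrative.
CONFIG = {
    "word_embedding_dim": 300,
    "category_embedding_dim": 100,
    "cnn_filters": 400,
    "cnn_window_size": 3,
    "category_dense_dim": 400,
    "attention_query_dim": 200,
    "negative_sampling_ratio_k": 4,
    "dropout": 0.2,
    "batch_size": 100,
}

def build_training_samples(impressions, k=CONFIG["negative_sampling_ratio_k"],
                           seed=0):
    """Pair each clicked news item with K randomly sampled non-clicked
    items from the same impression, yielding the (K+1)-way training
    samples described in the paper.

    `impressions` is assumed to be a list of (clicked, non_clicked)
    pairs of news-ID lists -- a hypothetical format for illustration.
    """
    rng = random.Random(seed)
    samples = []
    for clicked, non_clicked in impressions:
        if len(non_clicked) < k:
            continue  # not enough negatives in this impression
        for pos in clicked:
            negs = rng.sample(non_clicked, k)
            samples.append((pos, negs))
    return samples
```

For example, an impression with one click and five non-clicked items yields a single sample of one positive and four negatives, while impressions with fewer than K negatives are skipped.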