DAN: Deep Attention Neural Network for News Recommendation
Authors: Qiannan Zhu, Xiaofei Zhou, Zeliang Song, Jianlong Tan, Li Guo
AAAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct experiment on real-world news data sets, and the experimental results demonstrate the superiority and effectiveness of our proposed DAN model. |
| Researcher Affiliation | Academia | Qiannan Zhu (1,2), Xiaofei Zhou (1,2), Zeliang Song (1,2), Jianlong Tan (1,2), Li Guo (1,2). 1: Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China. 2: School of Cyber Security, University of Chinese Academy of Sciences, Beijing, China |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. |
| Open Datasets | Yes | We conduct experiments on a real-world online news dataset Adressa (Gulla et al. 2017) to evaluate the performance of our model. Adressa is published with the collaboration of Norwegian University of Science and Technology (NTNU) and Adressavisen (local newspaper in Trondheim, Norway) as a part of the RecTech project on recommendation technology. Dataset: http://reclab.idi.ntnu.no/dataset/ |
| Dataset Splits | No | The paper mentions using a "validation set" (e.g., "The best parameters are determined by validation set") and discusses training and testing, but it does not provide specific dataset split information (exact percentages, sample counts, or citations to predefined splits) needed to reproduce the data partitioning for training, validation, and testing. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory, or cloud instance types) used for running its experiments. |
| Software Dependencies | No | The paper mentions "We implement our model based on Tensorflow" but does not provide specific version numbers for TensorFlow or any other software dependencies. |
| Experiment Setup | Yes | Parameter Settings. We implement our model based on Tensorflow. In our model, we select the dimension of entity embeddings d and entity type k from {20, 50, 100, 200, 300}. For each CNN in the PCNN component, we select the size of filter [c1, c2] from all combinations in set {o, o/5} and {d, d/5}, the filter stride [s1, s2] from all combinations in set {2, 3, 4}, the number of filters v from {4, 8, 16, 32, 64}, and the batch size B from {64, 128, 256, 512, 1024}. The best parameters are determined by the validation set. To compare our model with baselines, we use F1 and AUC value as the evaluation metrics. For our model, the best parameter configuration is d = 100, k = 100, B = 256, v = 8. In PCNN, for the CNN with title embedding as input, o = 10, [c1, c2] = [2, 50], [s1, s2] = [2, 2]; for the CNN with profile embedding as input, o = 80, [c1, c2] = [40, 50], [s1, s2] = [2, 2]. |
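
The parameter settings quoted in the last row above can be made concrete with a small sketch. The Python snippet below is not from the paper: it records the reported best configuration and instantiates the two CNN branches of the PCNN component as standard TensorFlow/Keras Conv2D layers. The layer layout, ReLU activation, 'valid' padding, and all function and variable names are assumptions, since the excerpt only fixes o, d, [c1, c2], [s1, s2], v, and B.

```python
# Illustrative sketch of the reported DAN/PCNN hyperparameters.
# Assumption: each PCNN branch is a 2-D convolution over an
# (sequence length o) x (embedding dimension d) input matrix;
# pooling, attention, and prediction layers are omitted because
# the excerpt does not specify them.
import tensorflow as tf

# Best configuration reported in the paper's "Parameter Settings" paragraph.
CONFIG = {
    "entity_embedding_dim_d": 100,
    "entity_type_dim_k": 100,
    "batch_size_B": 256,
    "num_filters_v": 8,
    "title": {"o": 10, "filter_c": (2, 50), "stride_s": (2, 2)},
    "profile": {"o": 80, "filter_c": (40, 50), "stride_s": (2, 2)},
}


def make_pcnn_branch(seq_len, emb_dim, filter_c, stride_s, num_filters):
    """One CNN branch of the PCNN component (hypothetical layer layout)."""
    inputs = tf.keras.Input(shape=(seq_len, emb_dim, 1))
    features = tf.keras.layers.Conv2D(
        filters=num_filters,
        kernel_size=filter_c,
        strides=stride_s,
        activation="relu",   # assumed; not stated in the excerpt
        padding="valid",     # assumed; not stated in the excerpt
    )(inputs)
    return tf.keras.Model(inputs, features)


title_cnn = make_pcnn_branch(
    CONFIG["title"]["o"], CONFIG["entity_embedding_dim_d"],
    CONFIG["title"]["filter_c"], CONFIG["title"]["stride_s"],
    CONFIG["num_filters_v"],
)
profile_cnn = make_pcnn_branch(
    CONFIG["profile"]["o"], CONFIG["entity_embedding_dim_d"],
    CONFIG["profile"]["filter_c"], CONFIG["profile"]["stride_s"],
    CONFIG["num_filters_v"],
)
```

Under these assumed choices the title branch produces a 5 x 26 x 8 feature map and the profile branch a 21 x 26 x 8 feature map; the paper's pooling, attention, and click-prediction stages are not reproduced here.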