DMRAN: A Hierarchical Fine-Grained Attention-Based Network for Recommendation
Authors: Huizhao Wang, Guanfeng Liu, An Liu, Zhixu Li, Kai Zheng
IJCAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on two real-world datasets illustrate that DMRAN can improve the efficiency and effectiveness of the recommendation compared with the state-of-the-art methods. |
| Researcher Affiliation | Academia | 1 Institute of Artificial Intelligence, School of Computer Science and Technology, Soochow University, China; 2 Department of Computing, Macquarie University, Sydney, NSW, Australia; 3 School of Computer Science and Engineering, University of Electronic Science and Technology of China |
| Pseudocode | No | The paper describes the model architecture and mathematical equations, but does not provide a separate pseudocode or algorithm block. |
| Open Source Code | No | The paper does not include an explicit statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | We perform experiments on two real-world datasets. The details of them are shown in Table 1. The Amazon datasets [McAuley et al., 2015] accumulate user behavior logs, and we adopt its two subsets: Electronics and Clothing. |
| Dataset Splits | No | The paper describes how training and test examples are formed from each user's interaction sequence (the first k interactions predict the (k+1)-th for training, and the first n-1 predict the n-th for testing; see the split sketch below the table), but it does not explicitly describe a separate validation split (e.g., percentages or counts) or how it was formed, other than implicitly through the grid search for hyperparameters. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., library names like PyTorch or TensorFlow with their versions) needed to replicate the experiment. |
| Experiment Setup | Yes | All models are trained with stochastic gradient descent (SGD). The learning rate starts at 1.0. The batch size, L2 loss weight, and the size of all hidden layers are set to 32 or 16, 5e-5 or 1e-4, and 128, respectively. For DMRAN, we apply a grid search in {2, 5, 8, 10, 15, 20} for the special hyperparameter r, i.e., the number of rows as shown in Eq. 8 (see the configuration sketch below the table). |
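
The Dataset Splits row describes a per-user sequence split: the first k interactions predict the (k+1)-th for training, and the first n-1 predict the n-th for testing. Below is a minimal sketch of that protocol, assuming each user's interactions are already ordered by timestamp; the function name `split_user_sequence` is illustrative and not from the paper.

```python
def split_user_sequence(items):
    """items: one user's item ids, ordered by interaction timestamp."""
    n = len(items)
    train = []
    for k in range(1, n - 1):
        # the first k interactions predict the (k+1)-th interaction
        train.append((items[:k], items[k]))
    # the first n-1 interactions predict the n-th (held-out) interaction
    test = (items[: n - 1], items[n - 1])
    return train, test


# Example with five interactions:
train, test = split_user_sequence(["i1", "i2", "i3", "i4", "i5"])
# train -> [(['i1'], 'i2'), (['i1', 'i2'], 'i3'), (['i1', 'i2', 'i3'], 'i4')]
# test  -> (['i1', 'i2', 'i3', 'i4'], 'i5')
```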
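
The Experiment Setup row lists the reported training hyperparameters and the grid search over r (the number of rows in Eq. 8 of the paper). The sketch below only enumerates the corresponding configurations; `evaluate` is a hypothetical placeholder for training DMRAN and returning a validation metric, and how batch size and L2 weight are paired with each dataset is not specified in the excerpt.

```python
import itertools

# Fixed settings reported for all models.
base_config = {
    "optimizer": "sgd",    # stochastic gradient descent
    "learning_rate": 1.0,  # initial learning rate
    "hidden_size": 128,    # size of all hidden layers
}

# Dataset-dependent choices reported in the paper.
batch_sizes = [32, 16]
l2_weights = [5e-5, 1e-4]

# Grid for the DMRAN-specific hyperparameter r (number of rows in Eq. 8).
r_values = [2, 5, 8, 10, 15, 20]


def evaluate(config):
    """Hypothetical placeholder: train DMRAN with `config`, return a validation metric."""
    raise NotImplementedError


configs = [
    dict(base_config, batch_size=b, l2_weight=w, r=r)
    for b, w, r in itertools.product(batch_sizes, l2_weights, r_values)
]
# A full grid search would call evaluate(c) for each config and keep the best result.
```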