Hierarchical Context Enabled Recurrent Neural Network for Recommendation
Authors: Kyungwoo Song, Mingi Ji, Sungrae Park, Il-Chul Moon
AAAI 2019, pp. 4983-4991
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We experimented with the suggested structure on the sequential recommendation tasks with CiteULike, MovieLens, and LastFM, and our model showed the best performances in the sequential recommendations. |
| Researcher Affiliation | Collaboration | Kyungwoo Song¹, Mingi Ji¹, Sungrae Park², Il-Chul Moon¹; ¹Korea Advanced Institute of Science and Technology (KAIST), Korea; ²Clova AI Research, NAVER Corp., Korea |
| Pseudocode | No | The paper provides mathematical formulas and descriptions of the models but does not include explicit pseudocode blocks or algorithms. |
| Open Source Code | Yes | Our HCRNN source code is available at https://github.com/gtshs2/HCRNN. |
| Open Datasets | Yes | For the performance evaluation, we used three publicly available datasets: CiteULike, LastFM, and MovieLens. |
| Dataset Splits | Yes | We performed cross-validation by assigning 10% of the randomly chosen train set as the validation set. (A minimal split sketch appears below the table.) |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | we set the batch size (512), the item embedding (100), the RNN hidden dimension (100), the input dropout (0.25), the output layer dropout (0.5), the optimizer (Adam), and the learning rate (0.001) as shown in NARM (Li et al. 2017). The paper mentions the optimizer "Adam" but does not specify version numbers for any software or libraries. |
| Experiment Setup | Yes | Experiment Settings: For fair performance comparisons, we set the batch size (512), the item embedding (100), the RNN hidden dimension (100), the input dropout (0.25), the output layer dropout (0.5), the optimizer (Adam), and the learning rate (0.001) as shown in NARM (Li et al. 2017). (See the configuration sketch below.) |
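
For concreteness, here is a minimal sketch of the reported validation protocol: holding out a randomly chosen 10% of the training data as the validation set. The function name, the session-list representation, and the fixed seed are illustrative assumptions, not taken from the HCRNN repository.

```python
import numpy as np

def train_val_split(train_sessions, val_ratio=0.1, seed=0):
    """Hold out a random fraction (10% here) of the train set as validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(train_sessions))
    n_val = int(len(train_sessions) * val_ratio)
    val = [train_sessions[i] for i in idx[:n_val]]
    train = [train_sessions[i] for i in idx[n_val:]]
    return train, val
```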
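The Experiment Setup row lists concrete hyperparameters. The sketch below shows where each one plugs into a generic GRU session model in Keras; this is not the authors' HCRNN architecture (for that, see the linked repository), and `NUM_ITEMS` is a hypothetical placeholder for the dataset's item-vocabulary size.

```python
import tensorflow as tf

NUM_ITEMS = 10_000  # hypothetical item-vocabulary size; dataset-dependent

# Reported hyperparameters (following NARM, Li et al. 2017), placed in a
# generic next-item GRU model; the real HCRNN adds hierarchical context cells.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(NUM_ITEMS, 100),  # item embedding (100)
    tf.keras.layers.Dropout(0.25),              # input dropout (0.25)
    tf.keras.layers.GRU(100),                   # RNN hidden dimension (100)
    tf.keras.layers.Dropout(0.5),               # output layer dropout (0.5)
    tf.keras.layers.Dense(NUM_ITEMS, activation="softmax"),
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),  # Adam, lr 0.001
    loss="sparse_categorical_crossentropy",
)
# Training would then use the reported batch size:
# model.fit(x_train, y_train, batch_size=512, validation_data=(x_val, y_val))
```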