LSTM Networks for Online Cross-Network Recommendations
Authors: Dilruk Perera, Roger Zimmermann
IJCAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show that the proposed model consistently outperforms state-of-the-art in terms of accuracy, diversity and novelty. |
| Researcher Affiliation | Academia | Dilruk Perera and Roger Zimmermann School of Computing, National University of Singapore dilruk@comp.nus.edu.sg and rogerz@comp.nus.edu.sg |
| Pseudocode | No | The paper describes the model architecture and equations but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not include an explicit statement or link indicating that the source code for their proposed methodology is publicly available. It only links to a baseline's implementation. |
| Open Datasets | Yes | We extracted users with Twitter, Google Plus and YouTube interactions from two public datasets [Lim et al., 2015; Yan et al., 2014] |
| Dataset Splits | Yes | Then, from each user, the oldest 70% of target network interactions and source network interactions within the same time period were used as the training set. Similarly, the next 10% was used as the validation set (to tune hyper-parameters), and the latest 20% was used as the test set (held out for predictions). |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments. |
| Software Dependencies | No | The paper mentions using "Adam optimization (ADAM)" but does not provide specific software dependencies or their version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | We used a grid search algorithm to set the number of topics (Kt) to 60... We also used a grid search algorithm to set the number of dimensions in the embedding layer (k) to 100, the number of hidden units (h) to 400, and the dropout ratio to 0.35. The learning rate (µ) was set to a fairly small value (0.001) to obtain the local minimum. |
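The per-user chronological 70/10/20 split quoted in the "Dataset Splits" row can be sketched as follows. This is a minimal illustration, not the authors' code: the function name and toy data are hypothetical, and the paper additionally restricts source-network interactions to the same time period as the target-network training window.

```python
def chronological_split(interactions, train_frac=0.7, val_frac=0.1):
    """Split one user's interactions into train/val/test by time.

    `interactions` is a list of (timestamp, item) pairs. The oldest
    70% form the training set, the next 10% the validation set, and
    the latest 20% the held-out test set, per the paper's protocol.
    """
    ordered = sorted(interactions, key=lambda pair: pair[0])
    n = len(ordered)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    return (ordered[:n_train],                      # oldest 70%
            ordered[n_train:n_train + n_val],       # next 10%
            ordered[n_train + n_val:])              # latest 20%

# Hypothetical toy data: 10 interactions at increasing timestamps.
data = [(t, f"item_{t}") for t in range(10)]
train, val, test = chronological_split(data)
```

With 10 interactions this yields 7 training, 1 validation, and 2 test interactions, matching the 70/10/20 proportions.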