What to Do Next: Modeling User Behaviors by Time-LSTM

Authors: Yu Zhu, Hao Li, Yikang Liao, Beidou Wang, Ziyu Guan, Haifeng Liu, Deng Cai

IJCAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Experimental results on two real-world datasets show the superiority of the recommendation method using Time-LSTM over traditional methods. |
| Researcher Affiliation | Academia | State Key Lab of CAD&CG, College of Computer Science, Zhejiang University, China; College of Information and Technology, Northwest University of China; College of Computer Science, Zhejiang University, China; School of Computing Science, Simon Fraser University, Canada |
| Pseudocode | No | The paper provides mathematical equations describing the model (Eqs. 1-22) and architectural diagrams, but it does not include a distinct pseudocode block or algorithm section (an illustrative sketch follows this table). |
| Open Source Code | Yes | 'Our code is publicly available.' [Footnote 4: https://github.com/DarryO/time_lstm] |
| Open Datasets | Yes | 'Our proposed algorithm is evaluated on two datasets, LastFM and CiteULike.' [Footnote 1: http://www.dtic.upf.edu/~ocelma/MusicRecommendationDataset/lastfm-1K.html] [Footnote 2: http://www.citeulike.org/faq/data.adp] |
| Dataset Splits | No | The paper states, 'For each dataset, 80% users are randomly selected as training users and their tuples are used for training. The remaining users are test users.' This specifies a train/test split but does not mention or detail a validation split (a sketch of this user-level split follows this table). |
| Hardware Specification | Yes | The training time is evaluated on a GeForce GTX Titan Black GPU. |
| Software Dependencies | No | The paper mentions a 'publicly available python implementation' for a baseline method, but it does not specify version numbers for Python or for any key libraries used in the authors' own implementation, which reproducibility would require. |
| Experiment Setup | Yes | The number of units is set to 512 for LSTM and its variants. The other hyperparameters in all methods are tuned via cross-validation or set as in the original paper. |
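
Since the model is specified in the paper only through equations (Eqs. 1-22), the following Python sketch illustrates the core idea behind them: a standard LSTM step extended with a multiplicative time gate driven by the interval Δt between consecutive user actions. This is not a reproduction of the paper's exact equations (it omits, for example, the peephole connections), and all parameter names and initializations are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def init_params(d, n, rng):
    """Randomly initialize toy weights; all names are hypothetical."""
    p = {}
    for gate in ("i", "f", "o", "t", "c"):
        p[f"W_x{gate}"] = rng.normal(scale=0.1, size=(n, d))
        p[f"b_{gate}"] = np.zeros(n)
    for gate in ("i", "f", "o", "c"):
        p[f"W_h{gate}"] = rng.normal(scale=0.1, size=(n, n))
    # Maps the scalar interval dt into the time gate.
    p["w_tt"] = rng.normal(scale=0.1, size=n)
    return p

def time_lstm_step(x, dt, h_prev, c_prev, p):
    """One step of an LSTM cell extended with a time gate on the interval dt."""
    i = sigmoid(p["W_xi"] @ x + p["W_hi"] @ h_prev + p["b_i"])  # input gate
    f = sigmoid(p["W_xf"] @ x + p["W_hf"] @ h_prev + p["b_f"])  # forget gate
    o = sigmoid(p["W_xo"] @ x + p["W_ho"] @ h_prev + p["b_o"])  # output gate
    # Time gate: controls how strongly the current input writes to the cell,
    # as a learned function of the elapsed time since the previous action.
    t = sigmoid(p["W_xt"] @ x + sigmoid(dt * p["w_tt"]) + p["b_t"])
    g = np.tanh(p["W_xc"] @ x + p["W_hc"] @ h_prev + p["b_c"])  # candidate
    c = f * c_prev + i * t * g  # cell update, modulated by the time gate
    h = o * np.tanh(c)
    return h, c

# Usage: run a toy sequence of (action embedding, interval) pairs.
rng = np.random.default_rng(0)
d, n = 8, 16  # toy sizes; the paper uses 512 units
params = init_params(d, n, rng)
h, c = np.zeros(n), np.zeros(n)
for x, dt in [(rng.normal(size=d), 3.5), (rng.normal(size=d), 120.0)]:
    h, c = time_lstm_step(x, dt, h, c, params)
print(h.shape)  # (16,)
```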
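For the Dataset Splits row, a minimal sketch of the user-level 80/20 split the paper describes, assuming a NumPy implementation; the seed and function name are hypothetical, and no validation set is carved out because the paper reports none.

```python
import numpy as np

def split_users(user_ids, train_frac=0.8, seed=42):
    """Randomly assign a fraction of users (and all their tuples) to training.

    Returns (train_users, test_users); the remaining users are test users,
    as stated in the paper. The seed is a hypothetical choice.
    """
    rng = np.random.default_rng(seed)
    users = rng.permutation(np.asarray(user_ids))
    n_train = int(round(len(users) * train_frac))
    return set(users[:n_train]), set(users[n_train:])

train_users, test_users = split_users(range(1000))
print(len(train_users), len(test_users))  # 800 200
```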