Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Session-based Recommendations with Recurrent Neural Networks
Authors: Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, Domonkos Tikk
ICLR 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on two data-sets show marked improvements over widely used approaches. |
| Researcher Affiliation | Industry | Balázs Hidasi, Gravity R&D Inc., Budapest, Hungary (EMAIL); Alexandros Karatzoglou, Telefónica Research, Barcelona, Spain (EMAIL); Linas Baltrunas, Netflix, Los Gatos, CA, USA (EMAIL); Domonkos Tikk, Gravity R&D Inc., Budapest, Hungary (EMAIL) |
| Pseudocode | No | The paper describes the model mathematically and textually, but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to a code repository. |
| Open Datasets | Yes | The first dataset is that of the RecSys Challenge 2015. This dataset contains click-streams of an e-commerce site that sometimes end in purchase events. We work with the training set of the challenge and keep only the click events. We filter out sessions of length 1. The network is trained on 6 months of data, containing 7,966,257 sessions of 31,637,239 clicks on 37,483 items. |
| Dataset Splits | Yes | The optimization was done on a separate validation set. Then the networks were retrained on the training plus the validation set and evaluated on the final test set. |
| Hardware Specification | No | The paper mentions training on a 'GPU' and 'CPU' but does not specify any particular models (e.g., NVIDIA A100, Intel Xeon) or other hardware details. |
| Software Dependencies | No | The paper mentions 'Theano' in a footnote but does not provide a specific version number, nor other software dependencies with their versions. |
| Experiment Setup | Yes | Table 2 (best parametrizations per dataset/loss function, as mini-batch / dropout / learning rate / momentum): RSC15 TOP1: 50 / 0.5 / 0.01 / 0; RSC15 BPR: 50 / 0.2 / 0.05 / 0.2; RSC15 Cross-entropy: 500 / 0 / 0.01 / 0; VIDEO TOP1: 50 / 0.4 / 0.05 / 0; VIDEO BPR: 50 / 0.3 / 0.1 / 0; VIDEO Cross-entropy: 200 / 0.1 / 0.05 / 0.3 |
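The protocol quoted under "Dataset Splits" (tune on a separate validation set, retrain on train + validation, evaluate once on the test set) can be sketched as follows. This is a minimal illustration, not the authors' code; `fit` and `score` here are hypothetical toy stand-ins.

```python
# Sketch of the evaluation protocol described in the Dataset Splits row:
# hyperparameters are selected on a held-out validation set, the model is
# retrained on train + validation, and only then scored on the test set.

def select_and_evaluate(candidates, train, valid, test, fit, score):
    # Pick the candidate parametrization that scores best on validation.
    best = max(candidates, key=lambda p: score(fit(train, p), valid))
    # Retrain on the union of training and validation data.
    final_model = fit(train + valid, best)
    # The test set is touched exactly once, for the final evaluation.
    return best, score(final_model, test)

# Toy stand-ins (illustrative only): a "model" is just (data, params),
# and "score" rewards parameters close to 0.5 regardless of the split.
fit = lambda data, p: (data, p)
score = lambda model, split: -abs(model[1] - 0.5)

best, test_score = select_and_evaluate(
    [0.1, 0.5, 0.9], train=[1, 2], valid=[3], test=[4],
    fit=fit, score=score,
)
```

The key property is that the test split never influences the choice of `best`; it only enters the final `score` call.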