Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
A Meta-Learning Perspective on Cold-Start Recommendations for Items
Authors: Manasi Vartak, Arvind Thiagarajan, Conrado Miranda, Jeshua Bratman, Hugo Larochelle
NeurIPS 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our techniques on the real-world problem of Tweet recommendation. On production data at Twitter, we demonstrate that our proposed techniques significantly beat the MF baseline and also outperform production models for Tweet recommendation. |
| Researcher Affiliation | Collaboration | Manasi Vartak (Massachusetts Institute of Technology); Arvind Thiagarajan (Twitter Inc.); Conrado Miranda (Twitter Inc.); Jeshua Bratman (Twitter Inc.); Hugo Larochelle (Google Brain) |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | No | We used production data regarding Tweet engagement to perform an offline evaluation of our techniques. |
| Dataset Splits | Yes | The test and validation sets were similarly constructed, but for different days. For every model, we performed hyperparameter tuning on a validation set using random search and report results for the best performing model. |
| Hardware Specification | No | The paper states "All models were implemented in the Twitter Deep Learning platform [2]" but does not provide specific details on the hardware (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions "All models were implemented in the Twitter Deep Learning platform" and "SGD was used for optimization", but does not specify version numbers for any software dependencies or libraries. |
| Experiment Setup | No | The paper states that "Models were trained to minimize cross-entropy loss and SGD was used for optimization" and that "For every model, we performed hyperparameter tuning on a validation set using random search", but it does not provide specific hyperparameter values or detailed training configurations in the main text. |