A Meta-Learning Perspective on Cold-Start Recommendations for Items
Authors: Manasi Vartak, Arvind Thiagarajan, Conrado Miranda, Jeshua Bratman, Hugo Larochelle
NeurIPS 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our techniques on the real-world problem of Tweet recommendation. On production data at Twitter, we demonstrate that our proposed techniques significantly beat the MF baseline and also outperform production models for Tweet recommendation. |
| Researcher Affiliation | Collaboration | Manasi Vartak, Massachusetts Institute of Technology (mvartak@csail.mit.edu); Arvind Thiagarajan, Twitter Inc. (arvindt@twitter.com); Conrado Miranda, Twitter Inc. (cmiranda@twitter.com); Jeshua Bratman, Twitter Inc. (jbratman@twitter.com); Hugo Larochelle, Google Brain (hugolarochelle@google.com) |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | No | We used production data regarding Tweet engagement to perform an offline evaluation of our techniques. |
| Dataset Splits | Yes | The test and validation sets were similarly constructed, but for different days. For every model, we performed hyperparameter tuning on a validation set using random search and report results for the best performing model. (A sketch of such a day-based split appears after this table.) |
| Hardware Specification | No | The paper states "All models were implemented in the Twitter Deep Learning platform [2]" but does not provide specific details on the hardware (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions "All models were implemented in the Twitter Deep Learning platform" and "SGD was used for optimization" but does not specify version numbers for any software dependencies or libraries. |
| Experiment Setup | No | The paper states that "Models were trained to minimize cross-entropy loss and SGD was used for optimization" and that "For every model, we performed hyperparameter tuning on a validation set using random search", but it does not provide specific hyperparameter values or detailed training configurations in the main text. (A minimal sketch of the stated training setup also follows the table.) |
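
The day-based split quoted in the Dataset Splits row is described only in prose. Below is a minimal sketch of how such a split might be constructed; the DataFrame, column name, and dates are hypothetical assumptions for illustration, not details from the paper.

```python
import pandas as pd

def split_by_day(df, train_days, val_day, test_day):
    """Partition engagement records so that the validation and test sets
    come from different days than training, as the paper describes."""
    day = df["timestamp"].dt.date.astype(str)  # assumes a datetime column
    train = df[day.isin(train_days)]
    val = df[day == val_day]
    test = df[day == test_day]
    return train, val, test

# Hypothetical usage: train on two days, validate and test on later days.
# df = pd.read_parquet("engagements.parquet")
# train, val, test = split_by_day(df, ["2017-05-01", "2017-05-02"],
#                                 "2017-05-03", "2017-05-04")
```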
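Likewise, the Experiment Setup row notes only that models minimize cross-entropy loss with SGD and that hyperparameters were tuned by random search on a validation set. The sketch below illustrates that setup in PyTorch under stated assumptions: binary engagement labels, the learning-rate search range, and the trial count are placeholders, not values from the paper.

```python
import random
import torch
import torch.nn as nn

def train_one(model, loader, lr, epochs=1):
    """Train with SGD to minimize cross-entropy, as the paper states."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()  # binary cross-entropy on logits;
    # assumes model(x) and y share the same shape and y holds 0/1 labels
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
    return model

def random_search(make_model, train_loader, val_loader, trials=20):
    """Random search over hyperparameters, scored on the validation set."""
    best, best_acc = None, -1.0
    for _ in range(trials):
        # Log-uniform draw is a common choice for learning rates;
        # the paper does not state its actual search space.
        lr = 10 ** random.uniform(-4, -1)
        model = train_one(make_model(), train_loader, lr)
        with torch.no_grad():
            correct = total = 0
            for x, y in val_loader:
                pred = (torch.sigmoid(model(x)) > 0.5).float()
                correct += (pred == y).sum().item()
                total += y.numel()
        acc = correct / total
        if acc > best_acc:
            best, best_acc = (model, lr), acc
    return best
```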