Deep Models of Interactions Across Sets

Authors: Jason Hartford, Devon Graham, Kevin Leyton-Brown, Siamak Ravanbakhsh

ICML 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In experiments, our models achieved surprisingly good generalization performance on this matrix extrapolation task, both within domains (e.g., new users and new movies drawn from the same distribution used for training) and even across domains (e.g., predicting music ratings after training on movies)."
Researcher Affiliation | Academia | "Jason Hartford*, Devon R. Graham*, Kevin Leyton-Brown, Siamak Ravanbakhsh; Department of Computer Science, University of British Columbia, Canada."
Pseudocode | No | The paper describes the proposed layer and architecture using mathematical equations but does not include structured pseudocode or algorithm blocks.
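Although the paper gives no pseudocode, the exchangeable matrix layer it defines by equation is simple to sketch. The following is a minimal NumPy illustration, not the authors' implementation: it assumes mean pooling, a single input/output channel, and a ReLU nonlinearity, and the names `exchangeable_layer`, `w`, and `b` are illustrative. The layer mixes the entry itself with its row mean, its column mean, and the global mean, which makes it equivariant to permutations of rows and columns.

```python
import numpy as np

def exchangeable_layer(X, w, b):
    """Hypothetical sketch of a single-channel exchangeable matrix layer.

    Combines four permutation-equivariant views of the input matrix:
    the entry itself, its row mean, its column mean, and the global mean,
    mixed by the four weights in `w` plus a scalar bias `b`.
    """
    row_mean = X.mean(axis=1, keepdims=True)  # pool over columns: one summary per row
    col_mean = X.mean(axis=0, keepdims=True)  # pool over rows: one summary per column
    all_mean = X.mean()                       # pool over the whole matrix
    pre = w[0] * X + w[1] * row_mean + w[2] * col_mean + w[3] * all_mean + b
    return np.maximum(pre, 0.0)               # ReLU nonlinearity

# Equivariance check: shuffling the input's rows and columns shuffles
# the output in exactly the same way.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 5))
w, b = rng.normal(size=4), 0.1
perm_r, perm_c = rng.permutation(4), rng.permutation(5)
out = exchangeable_layer(X, w, b)
out_perm = exchangeable_layer(X[perm_r][:, perm_c], w, b)
assert np.allclose(out[perm_r][:, perm_c], out_perm)
```

Because every term is either untouched or pooled along a full axis, no weight depends on the identity of a particular row or column, which is what lets the trained model extrapolate to new users and items.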
Open Source Code | Yes | "For reproducibility we have released Tensorflow and Pytorch implementations of our model." TensorFlow: https://github.com/mravanba/deep_exchangeable_tensors; PyTorch: https://github.com/jhartford/AutoEncSets
Open Datasets | Yes | "The datasets used in our experiments are summarized in Table 1." The MovieLens datasets are standard (Harper & Konstan, 2015), as is Netflix, while for Flixster, Douban, and Yahoo Music the paper uses the 3000×3000 submatrix presented by Monti et al. (2017) for comparison purposes.
Dataset Splits | No | The paper reports RMSE scores for MovieLens-100K on the canonical 80/20 training/test split and for MovieLens-1M on a random 90/10 training/test split, but it does not explicitly mention a separate validation split.
Hardware Specification | No | The paper mentions "GPU memory" and thanks WestGrid and Compute Canada in the acknowledgments, but does not provide specific hardware details such as GPU models, CPU models, or exact memory amounts used for the experiments.
Software Dependencies | No | The paper states that implementations are available in TensorFlow and PyTorch, but it does not specify version numbers for these or any other software dependencies.
Experiment Setup | Yes | "Details on the training and architectures appear in the appendix."