Adaptive Sequence Submodularity

Authors: Marko Mitrovic, Ehsan Kazemi, Moran Feldman, Andreas Krause, Amin Karbasi

NeurIPS 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Additionally, to demonstrate the practical utility of our results, we run experiments on Amazon product recommendation and Wikipedia link prediction tasks. We present theoretical guarantees for our approach and we elaborate on the necessity of our novel proof techniques."
Researcher Affiliation | Academia | Marko Mitrovic, Yale University (marko.mitrovic@yale.edu); Ehsan Kazemi, Yale University (ehsan.kazemi@yale.edu); Moran Feldman, University of Haifa (moranfe@openu.ac.il); Andreas Krause, ETH Zürich (krausea@ethz.ch); Amin Karbasi, Yale University (amin.karbasi@yale.edu)
Pseudocode | Yes | "Algorithm 1 Adaptive Sequence Greedy Policy π" (a hedged sketch of this style of greedy policy is given after the table)
Open Source Code | Yes | "Dataset and code are attached in the supplementary material."
Open Datasets | Yes | "Using the Amazon Video Games review dataset [42], we consider the task of recommending products to users. Using the Wikispeedia dataset [57], we consider users who are surfing through Wikipedia towards some target article."
Dataset Splits | No | "For each user, we use the first g products as training data and try to predict the next k products (where k = 5, 10, 15, 20, 25, 30, and g = 4, 8, 12, 16, 20). If a user has purchased less than g + k products, we simply filter them out." The paper describes how data is partitioned for individual users (the first g products for training, the next k for prediction/testing), but it does not specify a global train/validation/test split for the whole dataset, and no explicit validation set is mentioned (a sketch of the per-user split is given after the table).
Hardware Specification | No | The paper does not specify any particular hardware used for running the experiments, such as GPU or CPU models.
Software Dependencies | No | "We implement our deep learning baselines in PyTorch [46]." The paper mentions PyTorch but does not provide a specific version number, nor does it list other software dependencies with versions.
Experiment Setup | Yes | "For the Feed Forward Neural Network, we use 4 layers (including input and output) with 256 nodes in each hidden layer. We use Rectified Linear Units (ReLU) as the activation function. We use 10% dropout at each layer. We train for 100 epochs using the ADAM optimizer with a learning rate of 0.001." (A PyTorch sketch consistent with this description is given after the table.)
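The Pseudocode row only names Algorithm 1 (the Adaptive Sequence Greedy Policy π) without reproducing it. As a rough, assumption-heavy illustration of what a greedy policy of this kind can look like, the sketch below repeatedly picks the directed edge with the largest estimated marginal gain among edges whose tail has already been selected (or self-loops), appends the edge's head to the sequence, and then observes that item's state. The `marginal_gain` and `observe` callbacks are hypothetical placeholders, not the paper's exact procedure.

```python
# Hypothetical sketch of an adaptive greedy selection loop over a directed graph.
# `marginal_gain(edge, picked_edges, observations)` and `observe(item)` are
# placeholder callbacks supplied by the caller; they are NOT from the paper.

def adaptive_sequence_greedy(edges, k, marginal_gain, observe):
    """Greedily build a sequence of up to k items.

    edges: list of (u, v) pairs, where selecting v after u yields utility
    k: maximum sequence length
    """
    sequence = []            # ordered list of selected items
    picked_edges = set()     # edges "covered" by the sequence so far
    observations = {}        # item -> observed state (filled adaptively)

    while len(sequence) < k:
        selected = set(sequence)
        # Feasible edges: self-loops, or edges whose tail was already picked.
        candidates = [(u, v) for (u, v) in edges
                      if v not in selected and (u == v or u in selected)]
        if not candidates:
            break
        # Pick the edge with the largest estimated marginal gain given
        # everything observed so far.
        best = max(candidates,
                   key=lambda e: marginal_gain(e, picked_edges, observations))
        u, v = best
        sequence.append(v)
        picked_edges.add(best)
        observations[v] = observe(v)  # adaptive step: reveal v's state
    return sequence
```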
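Although no global train/validation/test split is given, the per-user partitioning quoted in the Dataset Splits row is mechanical enough to sketch. Assuming each user's purchase history is a chronologically ordered list of product IDs, a minimal version of the described filtering and splitting might look like the following; function and variable names are illustrative, not taken from the released code.

```python
# Minimal sketch of the per-user split described in the paper:
# the first g purchases are the training context, the next k are the
# prediction targets; users with fewer than g + k purchases are dropped.
# Names are illustrative; the released code may differ.

def split_user_histories(histories, g, k):
    """histories: dict mapping user_id -> chronologically ordered product list."""
    splits = {}
    for user, products in histories.items():
        if len(products) < g + k:
            continue  # filter out users with too little history
        context = products[:g]        # first g products (training context)
        targets = products[g:g + k]   # next k products to predict
        splits[user] = (context, targets)
    return splits

# Sweeping the grid reported in the paper:
# for g in (4, 8, 12, 16, 20):
#     for k in (5, 10, 15, 20, 25, 30):
#         splits = split_user_histories(histories, g, k)
```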
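The quoted setup (4 layers counting input and output, 256-unit hidden layers, ReLU, 10% dropout, Adam with learning rate 0.001 for 100 epochs, implemented in PyTorch) pins down the feed-forward baseline fairly tightly. A minimal PyTorch sketch consistent with that description follows; the input/output dimensions and the loss function are assumptions, since the report does not quote them.

```python
import torch
import torch.nn as nn

# Sketch of the feed-forward baseline described in the paper: 4 layers
# counting input and output, 256-unit hidden layers, ReLU, 10% dropout,
# trained with Adam at lr = 0.001 for 100 epochs.  Input/output sizes and
# the loss function are assumptions, not quoted in the report.
class FeedForwardBaseline(nn.Module):
    def __init__(self, input_dim, output_dim, hidden_dim=256, dropout=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, hidden_dim), nn.ReLU(), nn.Dropout(dropout),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(), nn.Dropout(dropout),
            nn.Linear(hidden_dim, output_dim),
        )

    def forward(self, x):
        return self.net(x)

def train(model, loader, epochs=100, lr=0.001):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.BCEWithLogitsLoss()  # assumed multi-label objective
    for _ in range(epochs):
        for features, targets in loader:
            optimizer.zero_grad()
            loss = criterion(model(features), targets)
            loss.backward()
            optimizer.step()
```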