Submodular Maximization via Gradient Ascent: The Case of Deep Submodular Functions

Authors: Wenruo Bai, William Stafford Noble, Jeff A. Bilmes

NeurIPS 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We perform computational experiments that support our theoretical results. In this section, we perform a number of synthetic dataset experiments in order to demonstrate proof of concept and also to offer empirical evidence supporting our bounds above.
Researcher Affiliation | Academia | Wenruo Bai, William S. Noble, Jeff A. Bilmes; Depts. of Electrical & Computer Engineering, Computer Science and Engineering, and Genome Sciences, University of Washington, Seattle, WA 98195; {wrbai,wnoble,bilmes}@uw.edu
Pseudocode | Yes | Algorithm 1: Projected Gradient Ascent [4] (an illustrative sketch of this kind of procedure appears after the table).
Open Source Code | No | The paper does not state that the authors' own implementation is open source and provides no link to it; it only references third-party toolkits such as PyTorch and TensorFlow.
Open Datasets | No | "In this section, we perform a number of synthetic dataset experiments in order to demonstrate proof of concept and also to offer empirical evidence supporting our bounds above." The paper describes using "synthetic dataset experiments" but provides no concrete access information (link, DOI, or formal citation) for these datasets.
Dataset Splits | No | The paper does not provide specific details on train/validation/test dataset splits for reproducibility.
Hardware Specification | No | The paper mentions that experiments were run 'on a single CPU' and discusses potential speedups on 'parallel GPU machines', but it does not provide specific hardware details such as CPU/GPU models, memory, or clock speeds.
Software Dependencies | No | The paper states that algorithms were 'implemented in Python' and references 'PyTorch [33] and TensorFlow [1]' as potential toolkits, but it does not specify version numbers for Python or any other software dependencies.
Experiment Setup | No | The paper describes the structure of the synthetic DSF and the matroid constraint but does not provide specific experimental setup details such as hyperparameter values (e.g., learning rate, number of iterations) for the Projected Gradient Ascent algorithm it describes.
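
To make the Pseudocode and Experiment Setup rows concrete, the following is a minimal, hypothetical Python/NumPy sketch of projected gradient ascent on the multilinear extension of a toy one-layer deep submodular function (a concave function composed with nonnegative modular functions). It is not the authors' implementation: the ground-set size, weights, learning rate, iteration and sample counts, the cardinality constraint used in place of a general matroid, and the naive top-k rounding step are all illustrative assumptions, chosen precisely because the paper does not report such hyperparameters.

import numpy as np

rng = np.random.default_rng(0)

n, k = 20, 5                        # ground-set size and cardinality budget (assumed values)
W = rng.random((3, n))              # nonnegative modular weights for a toy one-layer DSF (assumed)

def dsf(ind):
    """Toy deep submodular function: sum of sqrt of nonnegative modular functions."""
    return np.sqrt(W @ ind).sum()

def multilinear_grad(x, num_samples=200):
    """Monte Carlo estimate of the gradient of the multilinear extension F(x)."""
    grad = np.zeros_like(x)
    for _ in range(num_samples):
        s = (rng.random(n) < x).astype(float)   # sample a set with inclusion probabilities x
        for i in range(n):
            s_with, s_without = s.copy(), s.copy()
            s_with[i], s_without[i] = 1.0, 0.0
            grad[i] += dsf(s_with) - dsf(s_without)
    return grad / num_samples

def project(x, k):
    """Euclidean projection onto {x in [0,1]^n : sum(x) <= k} via bisection on the shift."""
    x = np.clip(x, 0.0, 1.0)
    if x.sum() <= k:
        return x
    lo, hi = 0.0, x.max()
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if np.clip(x - mid, 0.0, 1.0).sum() > k:
            lo = mid
        else:
            hi = mid
    return np.clip(x - hi, 0.0, 1.0)

x = np.full(n, k / n)               # feasible fractional starting point
eta = 0.05                          # learning rate (assumed; not reported in the paper)
for _ in range(100):                # iteration count (assumed)
    x = project(x + eta * multilinear_grad(x), k)

chosen = np.argsort(-x)[:k]         # naive top-k rounding (the paper discusses rounding for matroids)
indicator = np.isin(np.arange(n), chosen).astype(float)
print("rounded set:", sorted(chosen.tolist()), "f(S) =", round(dsf(indicator), 3))

In a faithful reproduction one would replace the top-k rounding with pipage or swap rounding for the actual matroid constraint and report the learning rate, iteration count, and gradient-sampling budget explicitly; those are exactly the details flagged as missing in the Experiment Setup row above.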
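The Experiment Setup row also mentions a matroid constraint whose parameters are not fully specified. Purely as an illustration of what such a constraint can look like, the snippet below checks independence in a partition matroid; the block structure and per-block limits are hypothetical and not taken from the paper.

def partition_matroid_independent(S, blocks, limits):
    """Check independence in a partition matroid: at most limits[b] elements
    may be chosen from each block blocks[b] of the ground-set partition."""
    S = set(S)
    return all(len(S & set(block)) <= limit for block, limit in zip(blocks, limits))

# Hypothetical example: 20 elements split into 4 blocks of 5, at most 2 per block.
blocks = [list(range(b, b + 5)) for b in range(0, 20, 5)]
limits = [2, 2, 2, 2]
print(partition_matroid_independent({0, 1, 7, 12, 19}, blocks, limits))   # True
print(partition_matroid_independent({0, 1, 2, 7, 12}, blocks, limits))    # False (3 from block 0)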