Provable Variational Inference for Constrained Log-Submodular Models

Authors: Josip Djolonga, Stefanie Jegelka, Andreas Krause

NeurIPS 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | An empirical evaluation of the proposed techniques on several problem instances. We perform numerical experiments to better understand the practical performance of the proposed methods, namely how good the approximation is when compared to the theoretical e/(e − 1) factor and how well the marginals are estimated. Moreover, we showcase the scalability of our approach by performing inference on large real-world instances. (A short arithmetic note on the e/(e − 1) constant appears below the table.)
Researcher Affiliation | Academia | Josip Djolonga (Dept. of Computer Science, ETH Zürich, josipd@inf.ethz.ch); Stefanie Jegelka (CSAIL, MIT, stefje@csail.mit.edu); Andreas Krause (Dept. of Computer Science, ETH Zürich, krausea@ethz.ch)
Pseudocode | No | The paper describes algorithms (e.g., a greedy strategy, Kruskal's algorithm) but does not include any explicitly labeled pseudocode or algorithm blocks. (A generic sketch of greedy selection is given below the table.)
Open Source Code | No | The paper states "The implementation was done in Python using PyTorch... We provide all details in the appendix" but gives no direct link or explicit statement that the code for this work is open-source or publicly available.
Open Datasets | Yes | The paper states "We show our results in Figure 2(b), on n = 1500 points from the CIFAR10 [50] dataset normalized as in [44]."
Dataset Splits | No | The paper mentions using datasets such as CIFAR10 and sensor placement data, but does not provide specific details on how the data was split into training, validation, or test sets (e.g., percentages, sample counts, or specific split files).
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory, or cloud instance types) used for running its experiments.
Software Dependencies | No | The paper mentions "Python using PyTorch" but does not specify version numbers for either Python or PyTorch, which are needed for reproducible software dependencies.
Experiment Setup | No | The paper states "We provide all details in the appendix," indicating that specific experimental setup details such as hyperparameters or system-level training settings are not given in the main text.
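For context on the Research Type row: the e/(e − 1) constant referenced there is the reciprocal of the familiar 1 − 1/e greedy approximation guarantee. The following arithmetic note is added here for orientation only and is not taken from the paper.

\[
  \frac{e}{e-1} \;=\; \frac{1}{1 - 1/e} \;\approx\; 1.582,
  \qquad\text{equivalently}\qquad
  1 - \frac{1}{e} \;\approx\; 0.632 .
\]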
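As a generic point of reference for the Pseudocode row (this is not the paper's own listing; greedy_maximize and marginal_gain are hypothetical names), a minimal Python sketch of the classic greedy strategy for selecting at most k elements under a cardinality constraint:

def greedy_maximize(ground_set, marginal_gain, k):
    """Greedily pick up to k elements, always taking the largest marginal gain.

    marginal_gain(item, selected) should return F(selected + [item]) - F(selected);
    both names are placeholders, not identifiers from the paper.
    """
    selected = []
    remaining = set(ground_set)
    for _ in range(k):
        if not remaining:
            break
        # Element with the largest marginal gain w.r.t. the current selection.
        best = max(remaining, key=lambda e: marginal_gain(e, selected))
        if marginal_gain(best, selected) <= 0:
            break  # no remaining element improves the objective
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy usage: maximum coverage with a budget of k = 2 sets.
sets = {"a": {1, 2, 3}, "b": {3, 4}, "c": {5}}
def coverage_gain(item, selected):
    covered = set().union(*(sets[s] for s in selected)) if selected else set()
    return len(sets[item] - covered)
print(greedy_maximize(sets, coverage_gain, 2))  # picks "a" first, then "b" or "c" (tie)

For spanning-tree constraints, the matroid analogue of this loop is Kruskal's algorithm: scan edges in order of decreasing weight and keep an edge whenever it does not close a cycle with the edges kept so far.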