Scalable Variational Inference in Log-supermodular Models

Authors: Josip Djolonga, Andreas Krause

Venue: ICML 2015

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Our experiments confirm scalability of our approach, high quality of the marginals, and the benefit of incorporating higher-order potentials." |
| Researcher Affiliation | Academia | "Josip Djolonga JOSIPD@INF.ETHZ.CH Andreas Krause KRAUSEA@ETHZ.CH Department of Computer Science, ETH Zurich" |
| Pseudocode | No | No explicit pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | No | "The code will be made available at http://people.inf.ethz.ch/josipd/." |
| Open Datasets | Yes | "We used the data from Jegelka & Bilmes (2011), which contains a total of 36 images, each with a highly detailed (pixel-level precision) ground truth segmentation." |
| Dataset Splits | Yes | "Then, we performed a leave-one-out cross-validation for estimating the average AUC." (This protocol is sketched below the table.) |
| Hardware Specification | No | "The reported typical running times are for an image of size 427x640 pixels on a quad core machine and we report the wall clock time of the inference code (without setting up the factor graph or generating the superpixels)." This does not specify the CPU/GPU model or other hardware details. (The timing protocol is sketched below the table.) |
| Software Dependencies | No | "We have used the implementation from libDAI (Mooij, 2010)" and "solved using the total variation Douglas-Rachford (DR) code from (Barbero & Sra, 2011; 2014; Jegelka et al., 2013)". No version numbers are given for these software components. |
| Experiment Setup | Yes | "The maximum number of iterations was set to 70" for BP and MF; "We ran for at most 100 iterations" for DR. "For every method we tested several variants using different combinations for α, β, γ and θ (exact numbers provided in the appendix)." (The parameter sweep is sketched below the table.) |
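The split protocol quoted under Dataset Splits (leave-one-out cross-validation of the average AUC over the 36 images) could look roughly like the following. This is a minimal sketch, not the authors' code: `fit` and `predict_marginals` are hypothetical placeholders for their training and inference routines.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def loo_average_auc(images, masks, fit, predict_marginals):
    """Leave-one-out cross-validation of per-pixel marginal quality.

    `images` is a list of images and `masks` the matching pixel-level
    ground-truth segmentations; `fit` and `predict_marginals` are
    hypothetical stand-ins for training and marginal inference.
    """
    aucs = []
    for i in range(len(images)):
        # Train on every image except the held-out one.
        train = [(im, m) for j, (im, m) in enumerate(zip(images, masks)) if j != i]
        model = fit(train)
        # Score the held-out image: marginals serve as per-pixel scores.
        marginals = predict_marginals(model, images[i])
        aucs.append(roc_auc_score(masks[i].ravel(), marginals.ravel()))
    return float(np.mean(aucs))
```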
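Likewise, the timing protocol noted under Hardware Specification (wall-clock time of inference only, excluding factor-graph construction and superpixel generation) can be mirrored with a plain timer. `generate_superpixels`, `build_factor_graph`, and `infer` are hypothetical placeholders, not the paper's actual functions.

```python
import time

def timed_inference(image, generate_superpixels, build_factor_graph, infer):
    # Setup is deliberately excluded from the measurement,
    # matching the paper's reporting protocol.
    superpixels = generate_superpixels(image)
    fg = build_factor_graph(image, superpixels)
    start = time.perf_counter()  # measure inference only
    marginals = infer(fg)
    elapsed = time.perf_counter() - start
    return marginals, elapsed
```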
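Finally, the Experiment Setup row describes a sweep over hyperparameter combinations with per-method iteration caps. The sketch below assumes hypothetical parameter grids (the actual values of α, β, γ, and θ appear only in the paper's appendix) and a hypothetical `run_inference` wrapper around libDAI (BP, MF) and the Douglas-Rachford TV solver (DR).

```python
from itertools import product

# Hypothetical grids; the real values of alpha, beta, gamma, theta
# are listed in the paper's appendix and not reproduced here.
GRIDS = {
    "alpha": [0.5, 1.0],
    "beta": [0.1, 1.0],
    "gamma": [0.0, 0.5],
    "theta": [1.0, 2.0],
}

# Iteration caps reported in the paper: 70 for BP and MF, 100 for DR.
MAX_ITERS = {"BP": 70, "MF": 70, "DR": 100}

def sweep(run_inference):
    """Run every method on every hyperparameter combination.

    `run_inference(method, params, max_iter)` is a hypothetical helper
    wrapping the respective inference backend.
    """
    results = {}
    for method, max_iter in MAX_ITERS.items():
        for values in product(*GRIDS.values()):
            params = dict(zip(GRIDS.keys(), values))
            results[(method, values)] = run_inference(
                method, params, max_iter=max_iter)
    return results
```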