Sampling from Probabilistic Submodular Models

Authors: Alkis Gotovos, Hamed Hassani, Andreas Krause

NeurIPS 2015

Reproducibility Variable Result LLM Response
Research Type | Experimental | "We also evaluate the efficiency of the Gibbs sampler on three examples of such models, and compare against a recently proposed variational approach." ... "We compare the Gibbs sampler against the variational approach proposed by Djolonga and Krause [6] for performing inference in models of the form (1), and use the same three models as in their experiments." ... "Figure 1 compares the average absolute error of the approximate marginals with respect to the exact ones."
Researcher Affiliation | Academia | Alkis Gotovos, ETH Zurich (alkisg@inf.ethz.ch); S. Hamed Hassani, ETH Zurich (hamed@inf.ethz.ch); Andreas Krause, ETH Zurich (krausea@ethz.ch)
Pseudocode | Yes | Algorithm 1 Gibbs sampler
Input: Ground set V, distribution p(S) ∝ exp(βF(S))
1: X0 ← random subset of V
2: for t = 0 to Niter do
3:   v ∼ Unif(V)
4:   ΔF(v | Xt) ← F(Xt ∪ {v}) − F(Xt \ {v})
5:   p_add ← exp(β ΔF(v | Xt)) / (1 + exp(β ΔF(v | Xt)))
6:   z ∼ Unif([0, 1])
7:   if z ≤ p_add then Xt+1 ← Xt ∪ {v} else Xt+1 ← Xt \ {v}
8: end for
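A minimal Python sketch of Algorithm 1, assuming `F` is any set function over the ground set and representing states as Python sets; the function name `gibbs_sampler` and its signature are illustrative, not from the paper:

```python
import math
import random

def gibbs_sampler(V, F, beta=1.0, n_iter=2000, rng=None):
    """Sketch of Algorithm 1: Gibbs sampling from p(S) proportional to exp(beta * F(S)).

    V: list of ground-set elements; F: set function mapping a set to a float.
    Returns the list of visited states X_1, ..., X_Niter as frozensets.
    """
    rng = rng or random.Random()
    X = {v for v in V if rng.random() < 0.5}  # X0: random subset of V
    samples = []
    for _ in range(n_iter):
        v = rng.choice(V)                      # v ~ Unif(V)
        # Delta F(v | X): gain of adding v relative to removing it
        delta = F(X | {v}) - F(X - {v})
        # p_add = exp(beta*delta) / (1 + exp(beta*delta)), i.e. a sigmoid
        p_add = 1.0 / (1.0 + math.exp(-beta * delta))
        if rng.random() <= p_add:              # z ~ Unif([0,1]); add if z <= p_add
            X = X | {v}
        else:
            X = X - {v}
        samples.append(frozenset(X))
    return samples
```

For a modular function such as `F(S) = len(S)`, each element is included independently with probability sigmoid(β), which gives a quick sanity check of the sampler.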
Open Source Code | No | The paper does not provide any specific links or statements about the availability of open-source code for the methodology described.
Open Datasets | Yes | "The model is constructed from randomly subsampling real data from a problem of sensor placement in a water distribution network [22]."
Dataset Splits | No | The paper describes discarding samples as "burn-in" for the Gibbs sampler, which is standard MCMC practice, but does not provide details on traditional training/validation dataset splits used for model learning or hyperparameter tuning.
Hardware Specification | No | The paper does not specify any particular hardware (e.g., CPU, GPU models, or memory) used for running the experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers.
Experiment Setup | Yes | "We run the Gibbs sampler for 100, 500, and 2000 iterations on each problem instance. In compliance with recommended MCMC practice [11], we discard the first half of the obtained samples as burn-in, and only use the second half for estimating the marginals."
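The burn-in protocol above can be sketched as a small helper; this assumes the sampler's output is a list of frozensets, and the name `estimate_marginals` is illustrative rather than from the paper:

```python
def estimate_marginals(V, samples):
    """Estimate P(v in S) for each v in V from MCMC samples.

    Following the paper's protocol, the first half of the samples is
    discarded as burn-in and only the second half is used.
    """
    kept = samples[len(samples) // 2:]  # keep only the post-burn-in half
    return {v: sum(v in s for s in kept) / len(kept) for v in V}
```

With, e.g., 2000 iterations, this estimates each marginal from the last 1000 states of the chain.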