Lost Relatives of the Gumbel Trick

Authors: Matej Balog, Nilesh Tripuraneni, Zoubin Ghahramani, Adrian Weller

ICML 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conducted experiments with the following aims: …
Researcher Affiliation | Collaboration | University of Cambridge, UK; MPI-IS, Tübingen, Germany; UC Berkeley, USA; Uber AI Labs, USA; Alan Turing Institute, UK.
Pseudocode | Yes | Algorithm 1: Sequential sampler for Gibbs distribution (an illustrative sampler sketch follows the table)
Open Source Code | Yes | Code: https://github.com/matejbalog/gumbel-relatives
Open Datasets | Yes | Figure 4 shows the MSEs of U(α) as estimators of ln Z on 10 × 10 (n = 100) binary pairwise grid models with unary potentials sampled uniformly from [-1, 1] and pairwise potentials from [0, C] (attractive models) or from [-C, C] (mixed models), for varying coupling strengths C. (A model-generation sketch follows the table.)
Dataset Splits | No | The paper does not provide specific training/validation/test dataset splits with percentages or sample counts.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies | No | The paper mentions "using libDAI (Mooij, 2010)" but does not specify a version number for this or any other software dependency.
Experiment Setup | Yes | Figure 4 shows the MSEs of U(α) as estimators of ln Z on 10 × 10 (n = 100) binary pairwise grid models with unary potentials sampled uniformly from [-1, 1] and pairwise potentials from [0, C] (attractive models) or from [-C, C] (mixed models), for varying coupling strengths C. We replaced the expectations in the U(α)'s with sample averages of size M = 100, using libDAI (Mooij, 2010) to solve the MAP problems yielding these samples. (A Gumbel-trick estimator sketch follows the table.)
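The Pseudocode row cites Algorithm 1, a sequential sampler for a Gibbs distribution. The sketch below is not the paper's algorithm (which estimates the required partition functions with perturb-and-MAP tricks); it only illustrates the generic idea of sequential sampling, drawing each binary variable from its exact conditional computed by brute-force enumeration, so it is feasible only for tiny models. The names `log_partition`, `sequential_sample`, and the toy potential `phi` are illustrative assumptions, not the authors' code.

```python
import itertools
import math
import random

def log_partition(phi, n, prefix):
    """log of the sum of exp(phi(x)) over all binary completions of `prefix`."""
    terms = [phi(list(prefix) + list(tail))
             for tail in itertools.product([0, 1], repeat=n - len(prefix))]
    m = max(terms)
    return m + math.log(sum(math.exp(t - m) for t in terms))

def sequential_sample(phi, n, rng=random):
    """Draw one sample from p(x) proportional to exp(phi(x)), one variable at a time."""
    prefix = []
    for _ in range(n):
        # p(x_i = 1 | prefix) = Z(prefix + [1]) / (Z(prefix + [0]) + Z(prefix + [1]))
        log_z0 = log_partition(phi, n, prefix + [0])
        log_z1 = log_partition(phi, n, prefix + [1])
        p1 = 1.0 / (1.0 + math.exp(log_z0 - log_z1))
        prefix.append(1 if rng.random() < p1 else 0)
    return prefix

def phi(x):
    """Toy potential: a short binary chain with unary and attractive pairwise terms."""
    unary = sum(0.5 * (2 * xi - 1) for xi in x)
    pairwise = sum(1.0 * (x[i] == x[i + 1]) for i in range(len(x) - 1))
    return unary + pairwise

print(sequential_sample(phi, n=4))
```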
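The Open Datasets row indicates that the grid models are generated on the fly rather than drawn from a fixed dataset. A minimal sketch of that generation step, assuming a simple node/edge-list representation (the names `sample_grid_model`, `unary`, `edges`, and `pairwise` are mine, not the authors'):

```python
# Sketch (not the authors' code) of the model generation described above:
# 10x10 binary pairwise grids, unary potentials uniform on [-1, 1], pairwise
# potentials uniform on [0, C] (attractive) or [-C, C] (mixed).
import numpy as np

def sample_grid_model(side=10, coupling=1.0, attractive=True, seed=0):
    """Return unary potentials, edge list, and pairwise potentials for a grid model."""
    rng = np.random.default_rng(seed)
    n = side * side
    unary = rng.uniform(-1.0, 1.0, size=n)          # one unary potential per node
    low = 0.0 if attractive else -coupling           # lower end of the pairwise range
    edges, pairwise = [], []
    for r in range(side):
        for c in range(side):
            i = r * side + c
            if c + 1 < side:                         # horizontal edge to right neighbour
                edges.append((i, i + 1))
                pairwise.append(rng.uniform(low, coupling))
            if r + 1 < side:                         # vertical edge to lower neighbour
                edges.append((i, i + side))
                pairwise.append(rng.uniform(low, coupling))
    return unary, edges, np.array(pairwise)

unary, edges, pairwise = sample_grid_model(side=10, coupling=2.0, attractive=True)
print(len(unary), len(edges))  # 100 nodes, 180 edges
```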
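The Experiment Setup row replaces expectations with sample averages over M = 100 MAP values. The sketch below shows the plain Gumbel-trick special case of that recipe: ln Z is estimated as the average of M maxima of Gumbel-perturbed potentials, minus the Euler–Mascheroni constant. It uses brute-force enumeration in place of libDAI's MAP solver, so it only runs on a tiny toy model; it is an illustration under these assumptions, not a reproduction of the paper's U(α) family.

```python
import itertools
import math
import numpy as np

EULER_MASCHERONI = 0.5772156649015329

def gumbel_trick_log_z(phi, n, M=100, seed=0):
    """Estimate ln Z of p(x) proportional to exp(phi(x)) via M perturbed MAP solves."""
    rng = np.random.default_rng(seed)
    configs = list(itertools.product([0, 1], repeat=n))
    maxima = []
    for _ in range(M):
        # One i.i.d. standard Gumbel perturbation per configuration (full perturbation).
        gumbels = rng.gumbel(size=len(configs))
        maxima.append(max(phi(list(x)) + g for x, g in zip(configs, gumbels)))
    # E[max_x {phi(x) + Gumbel noise}] = ln Z + Euler-Mascheroni constant.
    return float(np.mean(maxima)) - EULER_MASCHERONI

def phi(x):
    """Toy potential: a short binary chain with unary and attractive pairwise terms."""
    return (sum(0.5 * (2 * xi - 1) for xi in x)
            + sum(float(x[i] == x[i + 1]) for i in range(len(x) - 1)))

exact = math.log(sum(math.exp(phi(list(x)))
                     for x in itertools.product([0, 1], repeat=4)))
print(gumbel_trick_log_z(phi, n=4), "vs exact", exact)
```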