The Benefits of Learning with Strongly Convex Approximate Inference

Authors: Ben London, Bert Huang, Lise Getoor

ICML 2015 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 'Our experiments demonstrate that learning with a strongly convex free energy, using our optimization framework to guarantee a given modulus, results in substantially more accurate marginal probabilities, thereby validating our theoretical claims and the effectiveness of our framework.' (A sketch of the strong-convexity definition this claim rests on appears below the table.)
Researcher Affiliation | Academia | Ben London (BLONDON@CS.UMD.EDU), University of Maryland, College Park, MD 20742, USA; Bert Huang (BHUANG@VT.EDU), Virginia Tech, Blacksburg, VA 24061, USA; Lise Getoor (GETOOR@SOE.UCSC.EDU), University of California, Santa Cruz, CA 95064, USA
Pseudocode | No | No pseudocode or algorithm blocks were found.
Open Source Code | No | The paper mentions using third-party tools such as Schmidt's LBFGS and UGM, and states 'we use our own implementation of counting number belief propagation (CBP)', but it provides no link or explicit statement that the authors' own code is open source.
Open Datasets | No | 'Our synthetic data generator is based on those used in prior work (e.g., Hazan & Shashua, 2008; Meshi et al., 2009) to evaluate approximate marginal inference. We generate data from an 8×8 non-toroidal grid-structured model...' The paper describes generating synthetic data but provides no access information for it. (A hypothetical generator sketch appears below the table.)
Dataset Splits | No | The paper mentions using '100 joint assignments to Y' to 'train a model' but does not specify train/validation/test splits, percentages, or cross-validation details.
Hardware Specification | No | No specific hardware details (e.g., CPU or GPU models, or memory specifications) used for running the experiments were mentioned.
Software Dependencies | No | The paper mentions using MATLAB, Mark Schmidt's Undirected Graphical Models (UGM) toolkit (2013b), Schmidt's implementation of LBFGS with Wolfe line search (2013a), and MATLAB's quadprog, but does not provide specific version numbers for any of these software components.
Experiment Setup | Yes | 'The regularization parameter, λ_m, is set to 1/√m, per Proposition 1. We generate 20 models... For each value of ω_s ∈ {0.05, 1} and ω_p ∈ {0.1, 0.2, 0.5, 1, 2, 5}...' (A sketch of this parameter sweep appears below the table.)
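
The Research Type row hinges on the modulus of strong convexity of the free energy. For context, here is a minimal sketch of the standard definition, in generic notation (a function f and modulus κ); the paper's own notation may differ.

```latex
% A differentiable function $f$ is $\kappa$-strongly convex, with modulus
% $\kappa > 0$, if for all $x, y$ in its domain:
f(y) \;\ge\; f(x) + \nabla f(x)^{\top}(y - x) + \frac{\kappa}{2}\,\lVert y - x \rVert^{2}.
% A larger modulus $\kappa$ means a more sharply curved objective, which is
% what the paper's optimization framework is said to guarantee.
```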
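The Open Datasets row notes that the synthetic generator is not released. The following is a hypothetical Python sketch of a generator in the same spirit: an 8×8 non-toroidal grid MRF with random singleton and pairwise potentials, sampled by Gibbs sampling. The uniform potential ranges, the Gibbs schedule, and all function names are our assumptions, not the authors' MATLAB code.

```python
# Hypothetical sketch, not the authors' generator: an 8x8 non-toroidal grid
# MRF with random potentials, sampled via Gibbs sampling. Parameter names
# omega_s (singleton strength) and omega_p (pairwise strength) mirror the
# sweep reported in the table.
import itertools
import numpy as np

def make_grid_edges(rows=8, cols=8):
    """Edges of a non-toroidal (no wraparound) grid, nodes indexed row-major."""
    edges = []
    for r, c in itertools.product(range(rows), range(cols)):
        i = r * cols + c
        if c + 1 < cols:
            edges.append((i, i + 1))     # right neighbor
        if r + 1 < rows:
            edges.append((i, i + cols))  # bottom neighbor
    return edges

def sample_grid_model(omega_s, omega_p, n_samples, rows=8, cols=8,
                      n_sweeps=200, seed=0):
    """Draw +/-1 samples from a pairwise grid MRF via Gibbs sampling."""
    rng = np.random.default_rng(seed)
    n = rows * cols
    # Assumed convention: potentials drawn uniformly from [-omega, omega].
    theta_s = rng.uniform(-omega_s, omega_s, size=n)
    nbrs = {i: [] for i in range(n)}
    for i, j in make_grid_edges(rows, cols):
        w = rng.uniform(-omega_p, omega_p)
        nbrs[i].append((j, w))
        nbrs[j].append((i, w))
    samples = np.empty((n_samples, n), dtype=int)
    y = rng.choice([-1, 1], size=n)
    for s in range(n_samples):
        for _ in range(n_sweeps):  # burn-in / thinning between samples
            for i in range(n):
                # Ising-style conditional: P(y_i = +1 | rest) is a logistic
                # function of the local field at node i.
                field = theta_s[i] + sum(w * y[j] for j, w in nbrs[i])
                y[i] = 1 if rng.random() < 1.0 / (1.0 + np.exp(-2.0 * field)) else -1
        samples[s] = y
    return samples

# Example: draw the m = 100 training assignments mentioned in the table.
data = sample_grid_model(omega_s=0.05, omega_p=0.1, n_samples=100, n_sweeps=50)
print(data.shape)  # (100, 64)
```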
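Similarly, the Experiment Setup row can be read as a sweep over (ω_s, ω_p) cells with 20 random models each and λ_m = 1/√m for m = 100 training assignments, where 1/√m is our reconstruction of the garbled '1/ m' in the extracted text. A minimal sketch of that loop, with a stub where training and evaluation would go:

```python
# Minimal sketch of the reported sweep; the loop body is a stub, since the
# training/evaluation pipeline (CBP inference, LBFGS) is not released.
# LAMBDA assumes the regularizer is 1/sqrt(m), our reading of the garbled
# '1/ m' in the extracted setup text.
import itertools
import math

OMEGA_S = [0.05, 1]
OMEGA_P = [0.1, 0.2, 0.5, 1, 2, 5]
N_MODELS = 20   # '20 models' per configuration
M_TRAIN = 100   # '100 joint assignments to Y'
LAMBDA = 1.0 / math.sqrt(M_TRAIN)

for omega_s, omega_p in itertools.product(OMEGA_S, OMEGA_P):
    for model_id in range(N_MODELS):
        # Stub: a real run would generate a model, train with the strongly
        # convex free energy, and measure marginal accuracy here.
        print(f"model {model_id:2d}: omega_s={omega_s}, omega_p={omega_p}, "
              f"lambda={LAMBDA:.2f}")
```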