Barrier Frank-Wolfe for Marginal Inference
Authors: Rahul G. Krishnan, Simon Lacoste-Julien, David Sontag
NeurIPS 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirically, we demonstrate the increased quality of results found by tightening the relaxation over the marginal polytope as well as the spanning tree polytope on synthetic and real-world instances. |
| Researcher Affiliation | Academia | Rahul G. Krishnan, Courant Institute, New York University; Simon Lacoste-Julien, INRIA Sierra Project-Team, École Normale Supérieure, Paris; David Sontag, Courant Institute, New York University |
| Pseudocode | Yes | Alg. 2 describes the pseudocode for our proposed algorithm to do marginal inference with TRW(µ; θ, ρ). |
| Open Source Code | Yes | Code is available at https://github.com/clinicalml/fw-inference. |
| Open Datasets | Yes | Restricted Boltzmann Machines (RBMs): From the Probabilistic Inference Challenge 2011 ... Horses: Large (N ~12000) MRFs representing images from the Weizmann Horse Data (Borenstein and Ullman, 2002) ... Chinese Characters: An image completion task from the KAIST Hanja2 database, compiled in OpenGM by Andres et al. (2012). |
| Dataset Splits | No | The paper does not explicitly state training, validation, and test dataset splits with percentages or counts. It mentions '5x5 grids' and '10 node cliques' as test cases, and describes running trials and aggregating results, but not specific data partitioning for model training or validation. |
| Hardware Specification | No | The paper does not specify the hardware used for running experiments, only mentioning general terms like "run in parallel" for solvers. |
| Software Dependencies | No | The paper mentions software like "QPBO", "TRW-S", "ICM using OpenGM", "Gurobi Optimization", "toulbar2", and "libDAI", but does not provide specific version numbers for any of them. It only cites the year of associated papers or manuals. |
| Experiment Setup | Yes | We perform 10 updates to ρ, optimize µ to a duality gap of 0.5 on M, and always perform correction steps. We use a non-uniform initialization of ρ computed with the Matrix Tree Theorem... We run TRBP for 1000 iterations using damping = 0.9... |
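The experiment setup row quotes the paper's use of Frank-Wolfe optimization of µ to a fixed duality gap. As an illustrative sketch only (not the authors' Alg. 2, and without their marginal-polytope linear oracle or correction steps), a generic Frank-Wolfe loop with a duality-gap stopping rule looks like the following; a quadratic objective over the probability simplex stands in for the TRW objective over the marginal polytope:

```python
import numpy as np

def frank_wolfe(grad, lmo, x0, gap_tol=1e-3, max_iter=100000):
    """Generic Frank-Wolfe loop with a duality-gap stopping rule.

    grad: gradient oracle of the objective at x
    lmo:  linear minimization oracle over the feasible polytope,
          returning the vertex s minimizing <g, s>
    """
    x = x0
    gap = np.inf
    for t in range(max_iter):
        g = grad(x)
        s = lmo(g)
        gap = g @ (x - s)        # Frank-Wolfe duality gap at iterate x
        if gap <= gap_tol:
            break
        step = 2.0 / (t + 2.0)   # standard open-loop step size
        x = x + step * (s - x)
    return x, gap

# Toy instance: minimize ||x - c||^2 over the probability simplex.
c = np.array([0.2, 0.5, 0.3])
grad = lambda x: 2.0 * (x - c)
lmo = lambda g: np.eye(len(g))[np.argmin(g)]  # simplex vertices are unit basis vectors
x0 = np.ones(3) / 3.0
x, gap = frank_wolfe(grad, lmo, x0)
```

In the paper's setting the linear oracle is a MAP call over the (relaxed) marginal polytope and the loop is combined with correction steps; the sketch above only shows the duality-gap termination logic the setup row refers to.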