Efficient semidefinite-programming-based inference for binary and multi-class MRFs

Authors: Chirag Pabbaraju, Po-Wei Wang, J. Zico Kolter

NeurIPS 2020

Reproducibility Variable Result LLM Response
Research Type Experimental In this paper, we propose an efficient method for computing the partition function or MAP estimate in a pairwise MRF by instead exploiting a recently proposed coordinate-descent-based fast semidefinite solver. We also extend semidefinite relaxations from the typical binary MRF to the full multi-class setting, and develop a compact semidefinite relaxation that can again be solved efficiently using the solver. We show that the method substantially outperforms (both in terms of solution quality and speed) the existing state of the art in approximate inference, on benchmark problems drawn from previous work. We also show that our approach can scale to large MRF domains such as fully-connected pairwise CRF models used in computer vision.
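The coordinate-descent SDP solver referenced in the abstract (the Mixing method) admits a compact sketch for the binary MAP case. The following is an illustrative reconstruction, not the authors' released code: the function name, the fixed "truth direction" vector, the rank heuristic, and the sign-based rounding are our assumptions about one standard way to set this up.

```python
import numpy as np

def mixing_method_map(A, b, k=None, iters=200, seed=0):
    """Sketch of Mixing-method coordinate descent for the SDP relaxation of
    binary-MRF MAP: maximize sum_ij A_ij <v_i, v_j> + sum_i b_i <v_i, v0>
    over unit vectors v_i in R^k, where v0 is a fixed 'truth direction'
    used to encode the linear bias terms (an assumption of this sketch)."""
    n = A.shape[0]
    if k is None:
        k = int(np.ceil(np.sqrt(2 * n)))  # standard low-rank bound heuristic
    rng = np.random.default_rng(seed)
    V = rng.standard_normal((n, k))
    V /= np.linalg.norm(V, axis=1, keepdims=True)
    v0 = np.zeros(k)
    v0[0] = 1.0
    for _ in range(iters):
        for i in range(n):
            # Gradient of the objective w.r.t. v_i (diagonal of A assumed 0);
            # the coordinate-wise maximizer is its normalization.
            g = A[i] @ V + b[i] * v0
            nrm = np.linalg.norm(g)
            if nrm > 1e-12:
                V[i] = g / nrm
    # Round by the sign of each vector's alignment with the truth direction.
    return np.sign(V @ v0 + 1e-12)
```

Each inner update is closed-form, which is what makes the method fast relative to generic interior-point SDP solvers; the paper's multi-class extension replaces this binary relaxation with a compact multi-class one.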
Researcher Affiliation Collaboration Chirag Pabbaraju Carnegie Mellon University cpabbara@cs.cmu.edu Po-Wei Wang Carnegie Mellon University poweiw@cs.cmu.edu J. Zico Kolter Carnegie Mellon University Bosch Center for AI zkolter@cs.cmu.edu
Pseudocode Yes Algorithm 1: Solving (10) via M4
Open Source Code Yes Source code for our experiments is available at https://github.com/locuslab/sdp_mrf.
Open Datasets No "We generate random complete graphs and Erdős–Rényi (ER) graphs. While generating ER graphs, we sample an edge in A with probability 0.5. The biases are sampled uniformly from [−1, 1]. We perform experiments on estimating the mode (Section 3) as well as Z (Section 4). The algorithms we mainly compare to in our experiments are AIS [23], Spectral Approximate Inference (Spectral AI) [26] and the method suggested by Wang et al. [35]." and "Here, we consider the setting as in Dense CRF [19] where the task is to compute the configuration of labels x ∈ [k]^n for the pixels in an image". No specific link or access information is provided for the datasets used in their experiments.
Dataset Splits No The paper does not provide specific dataset split information (exact percentages, sample counts, citations to predefined splits, or detailed splitting methodology) needed to reproduce the data partitioning.
Hardware Specification No The paper does not provide specific hardware details (exact GPU/CPU models, processor types, memory amounts, or detailed computer specifications) used for running its experiments.
Software Dependencies No The paper mentions software like "libDAI [22]" and refers to "the Mixing method [32]", but does not provide specific version numbers for these or any other software dependencies.
Experiment Setup Yes As in the setup in Park et al. [26], the coupling matrices are generated as follows: for a coupling strength c, the entries on edges in A are sampled uniformly from [−c0, c0], where c0 is appropriately scaled so that CS(A) ≈ c. The biases are sampled uniformly from [−1, 1]. ... For AIS, we have 3 main parameters: (K, num cycles, num samples). We provide a description of these parameters along with complete pseudocode of our implementation of AIS in Appendix D.
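The quoted instance-generation setup can be sketched as a small generator. This is our reconstruction, not the released code: we assume CS(A) denotes the mean absolute edge coupling (the paper follows Park et al.'s definition, which may differ), and the function name and interface are hypothetical.

```python
import numpy as np

def random_mrf_instance(n, coupling, er_prob=None, seed=0):
    """Generate a random pairwise-MRF benchmark instance as described above:
    a complete graph if er_prob is None, else an Erdos-Renyi graph with edge
    probability er_prob. Edge weights are uniform on [-c0, c0], with c0
    scaled so the mean absolute edge coupling equals `coupling` (our assumed
    reading of CS(A) ~ c). Biases are uniform on [-1, 1]."""
    rng = np.random.default_rng(seed)
    # Sample edges on the strict upper triangle, then symmetrize.
    mask = np.triu(np.ones((n, n), dtype=bool), k=1)
    if er_prob is not None:
        mask &= rng.random((n, n)) < er_prob
    W = rng.uniform(-1.0, 1.0, size=(n, n)) * mask
    cs = np.abs(W[mask]).mean() if mask.any() else 1.0
    W *= coupling / cs            # rescale so CS(A) matches `coupling`
    A = W + W.T                   # symmetric coupling matrix, zero diagonal
    b = rng.uniform(-1.0, 1.0, size=n)
    return A, b
```

For example, `random_mrf_instance(15, 2.5, er_prob=0.5)` would produce one ER instance at coupling strength roughly 2.5, matching the protocol quoted from Park et al. [26].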