Asynchronous Parallel Coordinate Minimization for MAP Inference
Authors: Ofer Meshi, Alexander G. Schwing
NeurIPS 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our numerical evaluations show that this approach indeed achieves significant speedups in common computer vision tasks. We illustrate the performance of our algorithm both on synthetic models and on a disparity estimation task from computer vision. |
| Researcher Affiliation | Collaboration | Ofer Meshi, Google (meshi@google.com); Alexander G. Schwing, Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign (aschwing@illinois.edu) |
| Pseudocode | Yes | Algorithm 1 Block Coordinate Minimization (an illustrative sketch appears below the table) |
| Open Source Code | No | The paper does not provide an explicit statement or link for open-source code release for the described methodology. |
| Open Datasets | Yes | It has 144x185 = 26,640 unary regions with 8 states and is a downsampled version from Schwing et al. [2011]. |
| Dataset Splits | No | The paper does not provide specific details on train/validation/test dataset splits (e.g., percentages or sample counts). |
| Hardware Specification | No | The paper mentions 'computing resources provided by the Innovative Systems Lab (ISL) at NCSA' but does not specify exact hardware models (e.g., GPU/CPU models, memory details) used for the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., library names with specific versions). |
| Experiment Setup | Yes | We use the temperature parameter γ = 1 for the smooth objective (Eq. (3)). We perform this function evaluation a fixed number of times, either 200 or 400 times. We examine the behavior when using one to 46 threads. The stepsize parameter, necessary in the case of HOGWILD!, is chosen to be as large as possible while still ensuring convergence (following Recht et al. [2011]). A hedged sketch of the γ-smoothing follows the table. |
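
The Pseudocode row points to Algorithm 1 (Block Coordinate Minimization). As an illustration only, the following is a minimal block coordinate minimization sketch on a toy quadratic objective; the quadratic form, block size, and random block schedule are assumptions made for demonstration and are not the paper's smoothed MAP dual or its exact update rules.

```python
# Hedged sketch of block coordinate minimization on an assumed toy objective
# f(x) = 0.5 x^T A x - b^T x (not the paper's smoothed MAP dual).
import numpy as np

def block_coordinate_minimization(A, b, block_size=8, n_iters=200, seed=0):
    """Minimize f(x) = 0.5 x^T A x - b^T x by exact block updates.

    A must be symmetric positive definite so each block subproblem
    has a unique minimizer.
    """
    rng = np.random.default_rng(seed)
    n = b.shape[0]
    x = np.zeros(n)
    blocks = [np.arange(s, min(s + block_size, n)) for s in range(0, n, block_size)]
    for _ in range(n_iters):
        B = blocks[rng.integers(len(blocks))]      # pick one block uniformly at random
        rest = np.setdiff1d(np.arange(n), B)
        # Exact minimization over the chosen block, other coordinates held fixed:
        # A[B,B] x_B = b[B] - A[B,rest] x_rest
        rhs = b[B] - A[np.ix_(B, rest)] @ x[rest]
        x[B] = np.linalg.solve(A[np.ix_(B, B)], rhs)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    M = rng.standard_normal((40, 40))
    A = M @ M.T + 40 * np.eye(40)                  # symmetric positive definite
    b = rng.standard_normal(40)
    x = block_coordinate_minimization(A, b)
    print("residual:", np.linalg.norm(A @ x - b))  # small residual indicates convergence
```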
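
The Experiment Setup row quotes a temperature parameter γ = 1 for the smooth objective (Eq. (3)). The snippet below is a hedged illustration of temperature smoothing via the log-sum-exp (soft-max) function, which approaches the hard maximum as γ → 0; the exact form of Eq. (3) in the paper is not reproduced here.

```python
# Hedged sketch: γ-smoothed maximum via log-sum-exp (illustrative, not Eq. (3)).
import numpy as np

def soft_max(scores, gamma=1.0):
    """Return γ * log Σ exp(scores / γ), a smooth upper bound on max(scores)."""
    z = scores / gamma
    m = z.max()                                  # shift for numerical stability
    return gamma * (m + np.log(np.exp(z - m).sum()))

scores = np.array([1.0, 3.0, 2.0])
print(soft_max(scores, gamma=1.0))               # ≈ 3.41, smooth bound on max = 3.0
print(soft_max(scores, gamma=0.01))              # ≈ 3.00, recovers the hard max
```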