Smooth and Strong: MAP Inference with Linear Convergence
Authors: Ofer Meshi, Mehrdad Mahdavi, Alexander G. Schwing
NeurIPS 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We now proceed to evaluate the proposed methods on real and synthetic data and compare them to existing state-of-the-art approaches. We begin with a synthetic model adapted from Kolmogorov [10]." |
| Researcher Affiliation | Academia | Ofer Meshi (TTI Chicago), Mehrdad Mahdavi (TTI Chicago), Alexander G. Schwing (University of Toronto) |
| Pseudocode | Yes | "Algorithm 1: Block-coordinate Frank-Wolfe for soft-constrained primal" (a hedged sketch of such a loop appears after this table). |
| Open Source Code | No | The paper does not provide any statement or link regarding the public release of source code for the methodology described. |
| Open Datasets | Yes | The paper uses publicly available benchmarks: "We next conduct experiments on real data from a protein side-chain prediction problem from Yanover et al. [33]" and "we use the Weizmann Horse dataset for foreground-background segmentation [2]". |
| Dataset Splits | No | The paper mentions a train/test split for the Weizmann Horse dataset ("50 images to learn the parameters of the model and the other 278 images to test inference") but does not describe a validation set or give split details for the remaining experiments. |
| Hardware Specification | No | The paper deliberately abstracts away hardware details: "In order to abstract away these details we compare the iteration cost of the vanilla versions of all algorithms." No specific hardware (e.g., CPU/GPU model, memory) is mentioned for the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python 3.8, PyTorch 1.9, CPLEX 12.4) that were used to conduct the experiments. |
| Experiment Setup | No | The paper does not provide specific experimental setup details such as hyperparameter values (e.g., learning rates, batch sizes, epochs), optimizer settings, or other system-level training configurations. |
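
For context on the Pseudocode row: the paper's Algorithm 1 is a block-coordinate Frank-Wolfe (BCFW) method applied to a soft-constrained primal of the MAP LP relaxation. The sketch below is a minimal, generic BCFW loop over a product of probability simplices, not a reproduction of the paper's algorithm; the names `bcfw_simplex` and `grad_fn`, the block layout, and the 2n/(k+2n) step-size schedule are illustrative assumptions (the paper's smoothed objective and step-size choice may differ).

```python
import numpy as np

def bcfw_simplex(grad_fn, blocks, num_iters=1000, seed=0):
    """Generic block-coordinate Frank-Wolfe over a product of simplices.

    grad_fn(x) returns the gradient of a smooth objective at x, and
    `blocks` lists the index arrays of the coordinate blocks. Each block
    is constrained to the probability simplex, mirroring the local
    marginals that appear in LP relaxations of MAP inference.
    """
    rng = np.random.default_rng(seed)
    n = len(blocks)
    x = np.zeros(sum(len(b) for b in blocks))
    for b in blocks:                      # initialize at an arbitrary vertex per block
        x[b[0]] = 1.0
    for k in range(num_iters):
        b = blocks[rng.integers(n)]       # sample one block uniformly at random
        g = grad_fn(x)[b]                 # block of the gradient
        s = np.zeros(len(b))
        s[np.argmin(g)] = 1.0             # LMO on a simplex: pick the best vertex
        gamma = 2.0 * n / (k + 2.0 * n)   # standard BCFW schedule (line search
        x[b] = (1.0 - gamma) * x[b] + gamma * s  # is another common choice)
    return x

# Toy usage (illustrative only): project a point onto two 3-dim simplices
# by minimizing ||x - t||^2, whose gradient is 2 * (x - t).
blocks = [np.arange(0, 3), np.arange(3, 6)]
t = np.array([0.2, 0.5, 0.3, 0.7, 0.1, 0.2])
x_star = bcfw_simplex(lambda x: 2.0 * (x - t), blocks, num_iters=2000)
```

The key structural points this sketch shares with BCFW-style MAP solvers are the per-block linear minimization oracle (trivial on a simplex) and the fact that each iteration touches only one block, which is what makes the per-iteration cost cheap.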