Lifted Generalized Dual Decomposition

Authors: Nicholas Gallo, Alexander Ihler

AAAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our experimental results show the superiority of the objective-based refinement criteria and good anytime performance compared to methods that exploit exact symmetries."
Researcher Affiliation | Academia | Nicholas Gallo, University of California, Irvine, Irvine, CA 92697-3435, ngallo1@uci.edu; Alexander Ihler, University of California, Irvine, Irvine, CA 92697-3435, ihler@ics.uci.edu
Pseudocode | Yes | Algorithm 1: syntactic coarsest stable partition. Algorithm 2: coarse-to-fine inference. (A hedged sketch of the partition step appears after this table.)
Open Source Code | No | The paper does not include an unambiguous statement or a direct link indicating that source code for the described methodology is publicly available.
Open Datasets | No | The paper describes the models used (Complete Graph, Binary Collective Classification, Clique-Cycle) and how their parameters (the b and u terms, l_xy) are set or randomized for the experiments, but it does not reference or provide access information for a pre-existing, named, publicly available dataset (e.g., MNIST, CIFAR-10) via citations or links.
Dataset Splits | No | The paper mentions "test models" and "varying its size" but does not specify split percentages or sample counts for training, validation, and test sets, reference predefined splits with citations, or describe a splitting methodology.
Hardware Specification | No | The paper does not provide hardware details such as GPU models, CPU types, or memory specifications used for its experiments.
Software Dependencies | No | The paper mentions performing "LBFGS black box optimization" but does not give version numbers for any software, libraries, or solvers used in the experiments.
Experiment Setup | Yes | "In our experiments, we set ϵ = 10⁻³, t = 10⁻² (in the definition of a(x)) and perform an LBFGS black box optimization with rank 20 Hessian correction. We found it worked well to perform a small number of inference iterations (30), followed by a small amount of model refinement (setting β = 1.25 in Algorithm 2)." (A hedged sketch of this optimization loop appears after this table.)
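
For readers who want a concrete picture of what a "coarsest stable partition" computation looks like, below is a minimal color-refinement (color-passing) sketch in Python. It illustrates the general idea only: it is not a reproduction of the paper's Algorithm 1, and the function name, data layout, and stopping rule are all assumptions.

```python
# Minimal color-refinement sketch of a coarsest stable partition over a
# factor graph. Illustrative only: NOT the paper's Algorithm 1; all names
# and data structures here are assumptions.
from collections import defaultdict

def coarsest_stable_partition(var_sig, factors):
    """var_sig: {var: signature, e.g. an id of its unary potential}
    factors:  {factor: (scope_tuple, potential_id)}
    Returns a list of variable groups (the lifted classes)."""
    vcol = dict(var_sig)  # initial variable colors
    fcol = {f: pid for f, (scope, pid) in factors.items()}
    while True:
        n_classes = len(set(vcol.values())) + len(set(fcol.values()))
        # Refine each factor's color by its potential id and the colors in
        # its scope (order kept: potentials need not be symmetric).
        fcol = {f: (pid, tuple(vcol[v] for v in scope))
                for f, (scope, pid) in factors.items()}
        # Refine each variable's color by the multiset of incident factor
        # colors, tagged with the variable's position in each scope.
        nbrs = defaultdict(list)
        for f, (scope, _) in factors.items():
            for i, v in enumerate(scope):
                nbrs[v].append((i, fcol[f]))
        # key=repr makes arbitrary nested color tuples sortable.
        vcol = {v: (vcol[v], tuple(sorted(nbrs[v], key=repr))) for v in vcol}
        if len(set(vcol.values())) + len(set(fcol.values())) == n_classes:
            break  # no class split this round: the partition is stable
    groups = defaultdict(list)
    for v, c in vcol.items():
        groups[c].append(v)
    return list(groups.values())
```

Stopping when the number of color classes stops growing is the standard Weisfeiler-Lehman-style termination test: refinement only ever splits classes, so an unchanged class count means a fixed point has been reached.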
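
The quoted setup also maps naturally onto off-the-shelf optimizers. Below is a hedged sketch using SciPy's L-BFGS-B, where `maxcor=20` plays the role of the rank-20 Hessian correction and `maxiter=30` the short inference bursts; the objective `dual_bound`, its gradient, and the refinement step are placeholders, not the paper's code.

```python
# Hedged sketch of the quoted setup with SciPy's L-BFGS-B. The objective
# below is a stand-in; the paper's dual decomposition bound is not shown.
import numpy as np
from scipy.optimize import minimize

def dual_bound(theta):
    """Placeholder smooth objective; returns (value, gradient)."""
    value = float(np.sum(theta ** 2))  # stand-in for the dual bound
    grad = 2.0 * theta
    return value, grad

theta = np.zeros(100)
beta = 1.25  # refinement growth factor quoted from Algorithm 2

for round_idx in range(5):  # alternate short optimization bursts...
    res = minimize(dual_bound, theta, jac=True, method="L-BFGS-B",
                   options={"maxcor": 20,    # rank of the Hessian correction
                            "maxiter": 30})  # ~30 inference iterations per burst
    theta = res.x
    # ...with a small amount of model refinement. In the paper the lifted
    # model would be refined here (growth controlled by beta); that step
    # is omitted in this sketch.
print(res.fun)
```

In SciPy, `maxcor` sets the number of stored correction pairs in limited-memory BFGS, which matches the usual meaning of a "rank 20 Hessian correction."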