Fast ADMM Algorithm for Distributed Optimization with Adaptive Penalty

Authors: Changkyu Song, Sejong Yoon, Vladimir Pavlovic

AAAI 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We first analyze and compare the proposed methods (ADMM-VP, ADMM-AP, ADMM-NAP, ADMM-VP + AP, ADMM-VP + NAP) with the baseline method using synthetic data. Next, we apply our method to a distributed structure from motion problem using two benchmark real world datasets.
Researcher Affiliation | Academia | Changkyu Song, Sejong Yoon and Vladimir Pavlovic, Rutgers, The State University of New Jersey, 110 Frelinghuysen Road, Piscataway, NJ 08854-8019, {cs1080, sjyoon, vladimir}@cs.rutgers.edu
Pseudocode | No | The overall algorithmic steps for the D-PPCA with Network Adaptive Penalty are summarized in (Song, Yoon, and Pavlovic 2015). This paper does not contain pseudocode or an algorithm block.
Open Source Code | No | The paper does not provide access to source code for the described methodology.
Open Datasets | Yes | We tested the performance of our method on five objects of the Caltech Turntable (Moreels and Perona 2007) and Hopkins 155 (Tron and Vidal 2007) datasets.
Dataset Splits | No | The paper mentions generating synthetic data and using real-world datasets but does not provide specific dataset split information (exact percentages, sample counts, or detailed splitting methodology) for training, validation, and testing.
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiment.
Experiment Setup | Yes | Unless noted otherwise, we used η0 = 10. We generated 500 samples of 20-dimensional observations from a 5-dimensional subspace following N(0, I), with the Gaussian measurement noise following N(0, 0.2 I). For the distributed settings, the samples are assigned to each node evenly. All experiments are run with 20 independent random initializations. To assess convergence, we compare the relative change of (12) to a fixed threshold (10^-3 in this case) for the D-PPCA experiments as in (Yoon and Pavlovic 2012).
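The experiment setup quoted above can be sketched in code. The following is a minimal, hedged interpretation of that description, not the authors' implementation: it assumes a random linear subspace basis, reads "N(0, 0.2 I)" as noise covariance 0.2·I (hence standard deviation sqrt(0.2)), and uses an even split across a hypothetical node count, since the paper's exact node configuration is not given here.

```python
import numpy as np

# Synthetic setup described in the paper (assumed interpretation):
# 500 samples of 20-dimensional observations drawn from a 5-dim subspace,
# latent coordinates ~ N(0, I), measurement noise ~ N(0, 0.2 I).
rng = np.random.default_rng(0)

n_samples, obs_dim, latent_dim = 500, 20, 5
W = rng.standard_normal((obs_dim, latent_dim))    # subspace basis (assumed random)
Z = rng.standard_normal((n_samples, latent_dim))  # latent coordinates ~ N(0, I)
noise = rng.normal(0.0, np.sqrt(0.2), size=(n_samples, obs_dim))
X = Z @ W.T + noise                               # observed data matrix

# "Samples are assigned to each node evenly" -- node count is hypothetical.
n_nodes = 5
node_data = np.array_split(X, n_nodes)

# Convergence test: relative change of the objective against a fixed
# threshold (10^-3 in the paper's D-PPCA experiments).
def converged(obj_prev, obj_curr, tol=1e-3):
    return abs(obj_curr - obj_prev) / max(abs(obj_prev), 1e-12) < tol
```

A driver loop would re-run this with 20 independent random seeds, matching the "20 independent random initializations" in the quoted setup.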