Adaptive Primal-Dual Splitting Methods for Statistical Learning and Image Processing

Authors: Tom Goldstein, Min Li, Xiaoming Yuan

NeurIPS 2015 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Numerical experiments show that adaptive PDHG has strong advantages over non-adaptive methods in terms of both efficiency and simplicity for the user. |
| Researcher Affiliation | Academia | Thomas Goldstein, Department of Computer Science, University of Maryland; Min Li, School of Economics and Management, Southeast University; Xiaoming Yuan, Department of Mathematics, Hong Kong Baptist University |
| Pseudocode | Yes | Algorithm 1: Adaptive PDHG (see the sketch after this table) |
| Open Source Code | No | The paper does not provide explicit access to source code or a link to a repository for the methodology described. |
| Open Datasets | Yes | We test our methods on (8) using the synthetic problem suggested in [21] (see the data-generation sketch after this table). [21] Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58(1):267–288, 1996. |
| Dataset Splits | No | The paper does not provide specific details about training, validation, or test dataset splits, percentages, or sample counts. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory specifications) used for running its experiments. |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers. |
| Experiment Setup | Yes | We terminate the algorithms when both the primal and dual residual norms (i.e. ‖p_k‖ and ‖d_k‖) are smaller than 0.05. In our implementation we use α₀ = η = 0.95. We use c = 0.9 in our experiments. |
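
The Pseudocode and Experiment Setup rows together pin down most of Algorithm 1: plain PDHG updates, a residual-balancing step-size rule driven by an adaptivity level (reportedly initialized and decayed with α₀ = η = 0.95), and termination once both residual norms fall below 0.05. The NumPy sketch below illustrates that scheme. It is not the authors' code: the problem interface (`prox_f`, `prox_gc`), the balancing threshold `delta`, the default step sizes, and the exact balancing direction are assumptions, and the paper's backtracking safeguard (the c = 0.9 condition) is omitted for brevity.

```python
import numpy as np

def adaptive_pdhg(K, prox_f, prox_gc, x, y, tau=1.0, sigma=1.0,
                  alpha0=0.95, eta=0.95, delta=1.5, tol=0.05, max_iter=1000):
    """Residual-balancing PDHG sketch for min_x f(x) + g(Kx),
    written against the saddle form min_x max_y <Kx, y> + f(x) - g*(y).

    prox_f(v, t) and prox_gc(v, s) are the proximal operators of f and g*.
    The balancing direction below is a common heuristic reading (grow the
    step on the side whose residual dominates); see Algorithm 1 in the
    paper for the exact update.
    """
    alpha = alpha0
    for _ in range(max_iter):
        # Standard PDHG primal and dual updates with over-relaxation.
        x_new = prox_f(x - tau * (K.T @ y), tau)
        y_new = prox_gc(y + sigma * (K @ (2.0 * x_new - x)), sigma)

        # Primal/dual residuals p_k, d_k used in the stopping test.
        p = (x - x_new) / tau - K.T @ (y - y_new)
        d = (y - y_new) / sigma - K @ (x - x_new)
        pn, dn = np.linalg.norm(p), np.linalg.norm(d)
        x, y = x_new, y_new
        if pn < tol and dn < tol:      # stop when both residuals are below 0.05
            break

        # Residual balancing: rescale the steps while keeping the product
        # tau*sigma fixed, then decay the adaptivity level by eta.
        if pn > delta * dn:            # primal residual dominates: grow tau
            tau, sigma, alpha = tau / (1 - alpha), sigma * (1 - alpha), alpha * eta
        elif dn > delta * pn:          # dual residual dominates: grow sigma
            tau, sigma, alpha = tau * (1 - alpha), sigma / (1 - alpha), alpha * eta
    return x, y
```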
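The Open Datasets row cites the synthetic lasso problem suggested in [21] without reproducing the generator. One plausible reading is Example 1 of Tibshirani's lasso paper: 20 observations, eight predictors with pairwise correlation 0.5^|i-j|, true coefficients (3, 1.5, 0, 0, 2, 0, 0, 0), and noise standard deviation 3. The sketch below assumes that setup; the constants come from the lasso paper rather than the PDHG paper, so treat them as assumptions. The usage lines reuse the hypothetical `adaptive_pdhg` function from the previous sketch with an illustrative regularization weight.

```python
import numpy as np

def tibshirani_example(n=20, rho=0.5, noise=3.0, seed=0):
    """Synthetic lasso data in the spirit of Tibshirani (1996), Example 1."""
    rng = np.random.default_rng(seed)
    beta = np.array([3.0, 1.5, 0.0, 0.0, 2.0, 0.0, 0.0, 0.0])  # sparse truth
    p = beta.size
    # AR(1)-style covariance: Corr(x_i, x_j) = rho**|i - j|.
    cov = rho ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
    X = rng.multivariate_normal(np.zeros(p), cov, size=n)
    b = X @ beta + noise * rng.standard_normal(n)
    return X, b, beta

# Hypothetical usage: solve min_x 0.5*||Xx - b||^2 + lam*||x||_1
# via its saddle-point form with the adaptive_pdhg sketch above.
X, b, _ = tibshirani_example()
lam = 1.0                                # illustrative weight, not from the paper
L = np.linalg.norm(X, 2)                 # spectral norm, used to bound step sizes
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - lam * t, 0.0)
prox_gc = lambda v, s: (v - s * b) / (1.0 + s)  # prox of g*(y) = 0.5*||y||^2 + <b, y>
x, y = adaptive_pdhg(X, soft, prox_gc,
                     np.zeros(X.shape[1]), np.zeros(b.size),
                     tau=0.9 / L, sigma=0.9 / L)
```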