Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Adaptive Primal-Dual Splitting Methods for Statistical Learning and Image Processing

Authors: Tom Goldstein, Min Li, Xiaoming Yuan

NeurIPS 2015

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Numerical experiments show that adaptive PDHG has strong advantages over non-adaptive methods in terms of both efficiency and simplicity for the user." |
| Researcher Affiliation | Academia | Thomas Goldstein, Department of Computer Science, University of Maryland; Min Li, School of Economics and Management, Southeast University; Xiaoming Yuan, Department of Mathematics, Hong Kong Baptist University |
| Pseudocode | Yes | "Algorithm 1 Adaptive PDHG" |
| Open Source Code | No | The paper does not provide explicit access to source code or links to a repository for the methodology described. |
| Open Datasets | Yes | "We test our methods on (8) using the synthetic problem suggested in [21]." [21] Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58:267-288, 1994. |
| Dataset Splits | No | The paper does not provide specific details about training, validation, or test dataset splits, percentages, or sample counts. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory specifications) used for running its experiments. |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers. |
| Experiment Setup | Yes | "We terminate the algorithms when both the primal and dual residual norms (i.e., ‖p_k‖ and ‖d_k‖) are smaller than 0.05." "In our implementation we use α0 = η = 0.95. We use c = 0.9 in our experiments." |
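The experiment-setup row above references the paper's residual-based stopping rule and stepsize adaptation. The following is a minimal sketch of that idea, not the authors' implementation: it runs PDHG on a toy least-squares problem, balances the stepsizes τ and σ using the primal/dual residuals, and stops when both residual norms fall below a tolerance. The toy problem, the factor-of-2 balancing threshold, and the default parameter values are assumptions for illustration; the paper's backtracking safeguard (the c = 0.9 parameter) is omitted here.

```python
import numpy as np

def adaptive_pdhg(A, b, alpha=0.5, eta=0.95, tol=0.05, max_iter=20000):
    """Illustrative adaptive PDHG for min_x 0.5*||Ax - b||^2, posed as the
    saddle-point problem min_x max_y <Ax - b, y> - 0.5*||y||^2.

    The stepsizes tau/sigma are rebalanced from the primal/dual residuals;
    their product is held fixed so the usual PDHG stability condition
    tau * sigma * ||A||^2 < 1 keeps holding after every adaptation.
    """
    m, n = A.shape
    L = np.linalg.norm(A, 2)        # spectral norm of A
    tau = sigma = 0.9 / L           # tau * sigma * L^2 = 0.81 < 1
    x, y = np.zeros(n), np.zeros(m)
    x_bar = x.copy()
    for _ in range(max_iter):
        # Dual step: prox of f*(y) = 0.5*||y||^2 + <b, y>
        y_new = (y + sigma * (A @ x_bar - b)) / (1.0 + sigma)
        # Primal step (g = 0, so its prox is the identity)
        x_new = x - tau * (A.T @ y_new)
        x_bar = 2.0 * x_new - x
        # Primal/dual residuals of the saddle-point optimality conditions
        p = np.linalg.norm((x - x_new) / tau - A.T @ (y - y_new))
        d = np.linalg.norm((y - y_new) / sigma - A @ (x - x_new))
        x, y = x_new, y_new
        if p < tol and d < tol:     # stop when both residuals are small
            break
        # Residual balancing: enlarge the stepsize on the lagging side,
        # then shrink the adaptation level alpha by the factor eta
        if p > 2.0 * d:
            tau, sigma, alpha = tau / (1 - alpha), sigma * (1 - alpha), alpha * eta
        elif d > 2.0 * p:
            tau, sigma, alpha = tau * (1 - alpha), sigma / (1 - alpha), alpha * eta
    return x
```

At a fixed point, y = Ax - b and A^T y = 0, so the iteration solves the normal equations; a tight tolerance should therefore recover the least-squares solution on a small random instance.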