A Semismooth Newton Method for Fast, Generic Convex Programming

Authors: Alnur Ali, Eric Wong, J. Zico Kolter

ICML 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirically, Newton-ADMM is significantly faster than SCS on a number of problems. The paper includes a dedicated section of empirical results.
Researcher Affiliation | Academia | Machine Learning Department, Carnegie Mellon University; Computer Science Department, Carnegie Mellon University.
Pseudocode | Yes | Algorithm 1: Newton-ADMM for convex optimization.
Open Source Code | No | The paper does not provide any links or explicit statements about the availability of its source code.
Open Datasets | No | The paper states that for its numerical examples, data was generated rather than drawn from publicly released datasets.
Dataset Splits | No | The paper does not explicitly provide details about training/validation/test dataset splits for reproducibility.
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments (e.g., CPU or GPU models, or cloud computing instances).
Software Dependencies | No | The paper mentions various existing software tools and frameworks (e.g., SCS), but it does not list the software dependencies or versions needed to reproduce the experiments.
Experiment Setup | Yes | The method has essentially no tuning parameters, since, for all the experiments, we just fix the maximum number of Newton iterations T = 100; the backtracking line search parameters α = 0.001, β = 0.5; and the GMRES tolerances ε(i) = 1/(i + 1), for each Newton iteration i.
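
To make the quoted setup concrete, below is a minimal Python sketch of a semismooth Newton outer loop using exactly those settings (T = 100, α = 0.001, β = 0.5, and GMRES tolerance 1/(i + 1) at Newton iteration i). It is an illustration under stated assumptions, not the authors' implementation: the residual function F and the generalized Jacobian-vector product jac_vec are hypothetical placeholders, and SciPy >= 1.12 is assumed for the rtol keyword of gmres.

import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def semismooth_newton(F, jac_vec, z0, T=100, alpha=1e-3, beta=0.5):
    # F(z): residual whose root is sought; jac_vec(z, v): product of an
    # element of the generalized Jacobian at z with a vector v (placeholders).
    z = z0.copy()
    for i in range(T):
        r = F(z)
        if np.linalg.norm(r) <= 1e-10:  # stop once the residual is essentially zero
            break
        J = LinearOperator((z.size, z.size), matvec=lambda v: jac_vec(z, v))
        # Inexact Newton direction: solve J d = -r to the loose tolerance 1/(i + 1).
        d, _ = gmres(J, -r, rtol=1.0 / (i + 1))
        # Backtracking line search on the residual norm with a sufficient-decrease test.
        t = 1.0
        while np.linalg.norm(F(z + t * d)) > (1.0 - alpha * t) * np.linalg.norm(r) and t > 1e-12:
            t *= beta
        z = z + t * d
    return z

The iteration-dependent GMRES tolerance mirrors the setup quoted above: early Newton iterations solve the linear system only loosely, and the tolerance tightens as the iteration count grows.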