Optimal Shrinkage for Distributed Second-Order Optimization

Authors: Fangzhao Zhang, Mert Pilanci

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our approach leads to significant improvements in convergence rate compared to standard baselines and recent proposals, as shown through experiments on both real and synthetic datasets.
Researcher Affiliation | Academia | Department of Electrical Engineering, Stanford University.
Pseudocode | Yes | Algorithm 1: Distributed Newton's method with optimal shrinkage; Algorithm 2: Distributed preconditioned conjugate gradient with optimal shrinkage. (Illustrative sketches follow the table.)
Open Source Code | Yes | Code for experiments is included in the submission.
Open Datasets | Yes | All real datasets used in this section are public and available at https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/.
Dataset Splits | No | The paper states that data is 'split evenly to each agent' and that it experiments 'with ten random permutations', but does not specify explicit train/validation/test splits with percentages or sample counts.
Hardware Specification | Yes | We run all experiments on google cloud n1-standard-8 machine.
Software Dependencies | No | The paper does not name specific software dependencies (programming languages or libraries) with version numbers needed to reproduce the results.
Experiment Setup | Yes | We pick m = 5, λ = 0.01, max iters = 20 for heart; m = 2, λ = 0.01, max iters = 10 for liver-disorders; m = 3, λ = 0.1, max iters = 5 for splice; m = 10, λ = 0.1, max iters = 20 for svmguide3; m = 100, λ = 1e-5, max iters = 50 for cod-rna; m = 200, λ = 1e-5, max iters = 50 for covtype; m = 40, λ = 0.01, max iters = 50 for phishing; m = 50, λ = 0.1, max iters = 50 for w8a. (See the first sketch below the table.)
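The Pseudocode and Experiment Setup rows describe distributed Newton-type iterations run with m agents, a ridge parameter λ, and a per-dataset iteration budget. The sketch below is a rough illustration only: it collects the reported hyperparameters in a config dictionary (`EXPERIMENT_CONFIG`) and runs a generic averaged distributed Newton step in which a scalar shrinkage γ is added to each agent's local Hessian. The ℓ2-regularized logistic-regression objective, the helper names, the synthetic stand-in data, and the fixed choice γ = 1e-2 are all assumptions made for illustration; the paper's actual optimal-shrinkage formula is not reproduced here.

```python
import numpy as np

# Hyperparameters quoted in the Experiment Setup row: number of agents m,
# ridge parameter lambda, and iteration budget for each LIBSVM dataset.
EXPERIMENT_CONFIG = {
    "heart":           {"m": 5,   "lam": 1e-2, "max_iters": 20},
    "liver-disorders": {"m": 2,   "lam": 1e-2, "max_iters": 10},
    "splice":          {"m": 3,   "lam": 1e-1, "max_iters": 5},
    "svmguide3":       {"m": 10,  "lam": 1e-1, "max_iters": 20},
    "cod-rna":         {"m": 100, "lam": 1e-5, "max_iters": 50},
    "covtype":         {"m": 200, "lam": 1e-5, "max_iters": 50},
    "phishing":        {"m": 40,  "lam": 1e-2, "max_iters": 50},
    "w8a":             {"m": 50,  "lam": 1e-1, "max_iters": 50},
}

def logistic_grad_hess(X, y, w, lam):
    """Gradient and Hessian of l2-regularized logistic loss on one shard
    (assumed objective; labels y in {-1, +1})."""
    p = 1.0 / (1.0 + np.exp(-y * (X @ w)))       # probability of the true label
    g = -X.T @ (y * (1.0 - p)) / len(y) + lam * w
    H = (X.T * (p * (1.0 - p))) @ X / len(y) + lam * np.eye(X.shape[1])
    return g, H

def distributed_newton_step(shards, w, lam, gamma):
    """One synchronous round: agents report local gradients, the coordinator
    averages them, each agent solves its shrunken local Newton system with
    the averaged gradient, and the resulting directions are averaged.
    `gamma` is a scalar stand-in for the paper's optimally chosen shrinkage."""
    d = len(w)
    local = [logistic_grad_hess(X_i, y_i, w, lam) for X_i, y_i in shards]
    grad = np.mean([g for g, _ in local], axis=0)
    dirs = [np.linalg.solve(H + gamma * np.eye(d), grad) for _, H in local]
    return w - np.mean(dirs, axis=0)

if __name__ == "__main__":
    cfg = EXPERIMENT_CONFIG["heart"]
    rng = np.random.default_rng(0)
    n, d = 270, 13                               # heart is 270 samples x 13 features
    X = rng.standard_normal((n, d))              # synthetic stand-in for the real data
    y = np.sign(X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n))
    shards = list(zip(np.array_split(X, cfg["m"]), np.array_split(y, cfg["m"])))
    w = np.zeros(d)
    for _ in range(cfg["max_iters"]):
        w = distributed_newton_step(shards, w, cfg["lam"], gamma=1e-2)
    g_final = np.mean([logistic_grad_hess(Xi, yi, w, cfg["lam"])[0]
                       for Xi, yi in shards], axis=0)
    print("final gradient norm:", np.linalg.norm(g_final))
```

A real run would instead load the LIBSVM files linked in the Open Datasets row (for example via sklearn.datasets.load_svmlight_file) and split the rows evenly across the m agents, as the paper describes.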
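Algorithm 2 in the Pseudocode row pairs conjugate gradient with the same shrinkage idea. As a second hedged sketch, the routine below is a textbook preconditioned CG solver; using one agent's shrunken local Hessian as the preconditioner for the global Newton system is an assumption about how such a method could be wired up, not a transcription of the paper's Algorithm 2.

```python
import numpy as np

def pcg(matvec, b, precond_solve, tol=1e-8, max_iters=100):
    """Preconditioned conjugate gradient for A x = b.  `matvec` applies the
    (symmetric positive definite) global Hessian; `precond_solve` applies the
    inverse of the preconditioner, here imagined to be one agent's shrunken
    local Hessian H_local + gamma * I."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    z = precond_solve(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iters):
        Ap = matvec(p)
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = precond_solve(r)
        rz_next = r @ z
        p = z + (rz_next / rz) * p
        rz = rz_next
    return x

# Hypothetical wiring with the quantities from the previous sketch:
#   direction = pcg(lambda v: H_global @ v, grad,
#                   lambda r: np.linalg.solve(H_local + gamma * np.eye(len(grad)), r))
#   w = w - direction
```

Everything in this routine except the preconditioner is standard CG, so the quality of the shrinkage choice would show up directly in the number of iterations needed to reach the tolerance.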