Coordinate Linear Variance Reduction for Generalized Linear Programming
Authors: Chaobing Song, Cheuk Yin Lin, Stephen Wright, Jelena Diakonikolas
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We complement our theoretical guarantees with numerical experiments that verify our algorithm's practical effectiveness in terms of wall-clock time and number of data passes. |
| Researcher Affiliation | Academia | Chaobing Song University of Wisconsin-Madison chaobing.song@wisc.edu Cheuk Yin Lin University of Wisconsin-Madison cylin@cs.wisc.edu Stephen J. Wright University of Wisconsin-Madison swright@cs.wisc.edu Jelena Diakonikolas University of Wisconsin-Madison jelena@cs.wisc.edu |
| Pseudocode | Yes | Algorithm 1 Coordinate Linear Variance Reduction (CLVR) |
| Open Source Code | Yes | Our code is available at https://github.com/ericlincc/Efficient-GLP. |
| Open Datasets | Yes | We use standard datasets from LibSVM [17]: a9a, gisette, rcv1, and news20. For datasets with 1/0 labels, we convert them to 1/-1 labels for compatibility with logistic regression formulation. |
| Dataset Splits | No | The paper mentions using 'standard datasets from LibSVM' but does not explicitly provide the training, validation, and test splits used in the experiments. It states only that 'Full details of the experimental setup can be found in Appendix D', and Appendix D.2 lists the datasets without detailing the splits. |
| Hardware Specification | Yes | All experiments were conducted on a single machine with an AMD Ryzen Threadripper 3970X 32-Core Processor and 256GB of RAM. |
| Software Dependencies | Yes | Our code is written in Julia 1.7. For production solvers, we used Gurobi [26] (version 9.5.1 for the benchmark), CPLEX [26] and Mosek [8] via JuMP interface [36] with default settings. |
| Experiment Setup | Yes | We tuned the γ parameter for CLVR (Algorithm 1) from {0.1, 1, 10, 100, 1000} and found that γ = 1 works best for a9a, γ = 0.1 for gisette, and γ = 10 for rcv1 and news20. Specifically, for each restart of CLVR and SPDHG, we run for 100 iterations. For PURE-CD, we run for 100 epochs, where each epoch sweeps through all the coordinates once. For all algorithms, we use 10 warmup iterations before starting the restart strategy. |
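The warmup-then-restart schedule described in the setup row (10 warmup iterations, then fixed-length restart segments) can be sketched generically. This is a hypothetical Python illustration, not the authors' implementation (their code is written in Julia); the `step` function stands in for one CLVR iteration, and the choice to restart from the running (ergodic) average of each segment is an assumption about how such restart strategies are typically realized.

```python
# Hypothetical sketch of a warmup-then-restart schedule: a fixed number of
# plain warmup iterations, then repeated segments of a fixed length, each
# restarting from the running average of the segment's iterates.
# `step` is a placeholder for one iteration of the underlying method.

def run_with_restarts(step, x0, warmup=10, iters_per_restart=100, restarts=5):
    """Apply `step` with warmup iterations followed by averaged restarts."""
    x = x0
    # Warmup iterations before the restart strategy begins.
    for _ in range(warmup):
        x = step(x)
    for _ in range(restarts):
        avg = x          # running (ergodic) average over this segment
        x_seg = x        # current iterate within the segment
        n = 1
        for _ in range(iters_per_restart):
            x_seg = step(x_seg)
            n += 1
            avg = avg + (x_seg - avg) / n  # incremental running average
        x = avg  # restart the next segment from the averaged iterate
    return x
```

With a contractive `step` (e.g. `lambda x: 0.5 * x`), the averaged restarts drive the iterate toward the fixed point, mirroring how restarting on the ergodic iterate is used to obtain linear convergence in this line of work.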