Variance Reduction via Primal-Dual Accelerated Dual Averaging for Nonsmooth Convex Finite-Sums
Authors: Chaobing Song, Stephen J. Wright, Jelena Diakonikolas
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Numerical experiments reveal competitive performance of VRPDA2 compared to state-of-the-art approaches. |
| Researcher Affiliation | Academia | 1Department of Computer Sciences, University of Wisconsin Madison, Madison, WI. |
| Pseudocode | Yes | Algorithm 1 Primal-Dual Accelerated Dual Averaging (PDA2) |
| Open Source Code | No | The paper does not provide an explicit statement or link for the open-sourcing of its own methodology's code. |
| Open Datasets | Yes | We compare VRPDA2 with two competitive algorithms, SPDHG (Chambolle et al., 2018) and PURE-CD (Alacaoglu et al., 2020), on the standard a9a and MNIST datasets from the LIBSVM library (LIB).¹ Both datasets are large, with n = 32,561, d = 123 for a9a, and n = 60,000, d = 780 for MNIST. [Footnote 1: LIBSVM Library. https://www.csie.ntu.edu.tw/~cjlin/libsvm/index.html. Accessed: Feb. 3, 2020.] |
| Dataset Splits | No | The paper does not explicitly provide training/validation/test dataset splits needed to reproduce the experiment. |
| Hardware Specification | No | The paper does not explicitly describe the hardware used to run its experiments, such as specific GPU or CPU models. |
| Software Dependencies | No | The paper mentions the 'LIBSVM library' but does not specify its version number or other key software components with their versions. |
| Experiment Setup | Yes | For simplicity, we normalize each data sample to unit Euclidean norm, so that the Lipschitz constants appearing in the analysis (such as R0 in VRPDA2) are at most 1. We then scale these Lipschitz constants by {0.1, 0.25, 0.5, 0.75, 1}... We fix the ℓ1-regularization parameter λ to 10⁻⁴ and vary σ ∈ {0, 10⁻⁸, 10⁻⁴}, to represent the general convex, ill-conditioned strongly convex, and well-conditioned strongly convex settings, respectively. |
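The quoted setup normalizes each data sample to unit Euclidean norm so that the relevant Lipschitz constants are at most 1. A minimal sketch of that preprocessing step, on random data standing in for the a9a/MNIST feature matrices (the function name and synthetic data are illustrative assumptions, not from the paper):

```python
import numpy as np

def normalize_rows(X):
    """Scale each data sample (row of X) to unit Euclidean norm.

    Zero rows are left unchanged to avoid division by zero.
    This mirrors the per-sample normalization described in the
    experiment setup; it is a sketch, not the authors' code.
    """
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    norms[norms == 0] = 1.0
    return X / norms

# Illustrative stand-in for the a9a feature matrix (n samples, d = 123).
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 123))
Xn = normalize_rows(X)
print(np.allclose(np.linalg.norm(Xn, axis=1), 1.0))  # every row has norm 1
```

After this step, the row norms (and hence the data-dependent Lipschitz constants) are bounded by 1, which is what allows the paper's subsequent scaling by the factors {0.1, 0.25, 0.5, 0.75, 1}.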