Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
MAST: Model-Agnostic Sparsified Training
Authors: Yury Demidovich, Grigory Malinovsky, Egor Shulgin, Peter Richtárik
ICLR 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To empirically validate our theoretical framework and its implications, we focus on carefully controlled settings that satisfy the assumptions of our work. Specifically, we consider an ℓ2-regularized logistic regression optimization problem with the a5a dataset from the LibSVM repository (Chang & Lin, 2011). See Appendix H for further details and Appendix H.2 for more results on other methods and sketches. In Figure 1, we compare the test accuracy of sparsified solutions for the standard (ERM) problem (1) and introduced MAST formulation (2). Visualization is performed using the boxplot method from Seaborn (version 0.11.0) library (Waskom, 2021) with default parameters. For ERM, we find the exact (up to machine precision) optimum, which is subsequently used for the accuracy evaluation. For the MAST optimization problem, we run DSGD with exact sketched gradient ∇f_S for every sparsity level. |
| Researcher Affiliation | Academia | Yury Demidovich, Grigory Malinovsky, Egor Shulgin, Peter Richtárik. King Abdullah University of Science and Technology (KAUST), Saudi Arabia* *Contacts: EMAIL |
| Pseudocode | Yes | Algorithm 1 Double Sketched (S)GD. 1: Parameters: learning rate γ > 0; distribution D; initial model and shift x^0, v ∈ R^d. 2: for t = 0, 1, 2, ... do 3: Sample a sketch: S^t ~ D. 4: Form a gradient estimator: g^t = ∇f_{S^t}(x^t) (exact, I) or stochastic ∇̂f_{S^t}(x^t) (inexact, II). 5: Perform a gradient-type step: x^{t+1} = x^t − γ g^t. |
| Open Source Code | No | Our implementation is based on the public GitHub repository of Konstantin Mishchenko. Simulations were performed on a machine with 24 Intel(R) Xeon(R) Gold 6246 CPU @ 3.30 GHz. |
| Open Datasets | Yes | Specifically, we consider an ℓ2-regularized logistic regression optimization problem with the a5a dataset from the LibSVM repository (Chang & Lin, 2011). |
| Dataset Splits | Yes | The dataset is shuffled and split into train and test in 0.75 to 0.25 proportions. |
| Hardware Specification | Yes | Simulations were performed on a machine with 24 Intel(R) Xeon(R) Gold 6246 CPU @ 3.30 GHz. |
| Software Dependencies | Yes | Visualization is performed using the boxplot method from Seaborn (version 0.11.0) library (Waskom, 2021) with default parameters. We use the array_split method from the NumPy (version 1.26.2) package (Harris et al., 2020). |
| Experiment Setup | Yes | Regularization parameter λ in (46) is set to guarantee that the condition number of the loss function f is equal to 10^2. The dataset is shuffled and split into train and test in 0.75 to 0.25 proportions. Initial model weights x^0 ∈ R^d are generated from a standard Gaussian distribution N(0, 1). For every sparsity level, Perm-K sketches C = {S_1, ..., S_K} are generated, leading to different MAST problem formulations. This process is repeated 10 times with various permutations π by changing random seeds. After the ERM and MAST models x are obtained, the accuracy of the sparsified model S_k x is calculated for every k ∈ [K] on the test set. This results in distributions of accuracies for every sparsity level. |
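The DSGD loop quoted in the Pseudocode row can be sketched concretely. The snippet below is a minimal illustration of the exact-gradient variant (I) for ℓ2-regularized logistic regression, not the paper's implementation: sketches are taken as 0/1 diagonal coordinate masks, and all function names and hyperparameters are illustrative.

```python
import numpy as np

def sketched_logreg_grad(x, S, A, b, lam):
    # Gradient of the sketched loss f_S(x) = f(S x) for l2-regularized
    # logistic regression. S is a 0/1 diagonal mask, so by the chain rule
    # grad f_S(x) = S * grad f(S x).
    z = S * x                                  # sparsified model S x
    margins = b * (A @ z)
    sigma = 1.0 / (1.0 + np.exp(margins))      # sigmoid(-margins)
    g = -(A.T @ (b * sigma)) / len(b) + lam * z
    return S * g

def dsgd(A, b, lam, sketches, gamma, T, seed=0):
    # Minimal Double Sketched GD (exact variant I of Algorithm 1):
    # sample S^t uniformly from the given sketches and take a step
    # x^{t+1} = x^t - gamma * grad f_{S^t}(x^t).
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[1])        # x^0 ~ N(0, 1), as in the setup
    for _ in range(T):
        S = sketches[rng.integers(len(sketches))]
        x = x - gamma * sketched_logreg_grad(x, S, A, b, lam)
    return x
```

The uniform sampling of S^t stands in for the generic sketch distribution D of Algorithm 1; the stochastic variant (II) would replace the exact sketched gradient with a minibatch estimate.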
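The Perm-K sketch generation described in the setup can be illustrated with NumPy's array_split, which the Software Dependencies row quotes the authors using. This is a hedged sketch of the idea, not the paper's code: sketches are built as 0/1 diagonal masks, and any scaling in the paper's exact Perm-K definition is omitted.

```python
import numpy as np

def perm_k_masks(d, K, rng):
    # Perm-K-style sketches: a random permutation of the d coordinates is
    # split into K blocks via np.array_split, and sketch S_k is a 0/1
    # diagonal mask keeping only the coordinates of block k.
    perm = rng.permutation(d)
    masks = []
    for block in np.array_split(perm, K):
        m = np.zeros(d)
        m[block] = 1.0
        masks.append(m)
    return masks
```

By construction the K masks partition the coordinates, so each coordinate is kept by exactly one sketch; regenerating them under different random seeds yields the varying permutations mentioned in the setup.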
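The setup fixes the regularization so that the loss has condition number 10^2, but the quoted excerpt does not give the formula. One standard way to achieve this (an assumption, not the paper's stated procedure) uses the fact that ℓ2 regularization adds λ to both the smoothness and strong-convexity constants:

```python
import numpy as np

def reg_for_condition_number(A, kappa=100.0):
    # Illustrative (assumed) choice of lambda: the unregularized logistic
    # loss has smoothness at most L0 = lambda_max(A^T A) / (4 n) and strong
    # convexity 0, so the regularized loss has L = L0 + lambda, mu = lambda,
    # giving kappa = (L0 + lambda) / lambda  =>  lambda = L0 / (kappa - 1).
    n = A.shape[0]
    L0 = np.linalg.eigvalsh(A.T @ A).max() / (4.0 * n)
    return L0 / (kappa - 1.0)
```

With kappa = 100 this matches the quoted target condition number of 10^2 for the smoothness-based bound above.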