Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

MAST: Model-Agnostic Sparsified Training

Authors: Yury Demidovich, Grigory Malinovsky, Egor Shulgin, Peter Richtárik

ICLR 2025 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To empirically validate our theoretical framework and its implications, we focus on carefully controlled settings that satisfy the assumptions of our work. Specifically, we consider an ℓ2-regularized logistic regression optimization problem with the a5a dataset from the LibSVM repository (Chang & Lin, 2011). See Appendix H for further details and Appendix H.2 for more results on other methods and sketches. In Figure 1, we compare the test accuracy of sparsified solutions for the standard (ERM) problem (1) and the introduced MAST formulation (2). Visualization is performed using the boxplot method from the Seaborn (version 0.11.0) library (Waskom, 2021) with default parameters. For ERM, we find the exact (up to machine precision) optimum, which is subsequently used for the accuracy evaluation. For the MAST optimization problem, we run DSGD with the exact sketched gradient ∇f_𝒟 for every sparsity level.
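For reference, the ℓ2-regularized logistic regression objective described in the excerpt can be written down directly. The following is a minimal sketch; the exact form of problem (1)/(46) in the paper (e.g. normalization of the data term) may differ:

```python
import numpy as np

def logistic_loss(x, A, b, lam):
    # Mean logistic loss over rows of A with labels b in {-1, +1},
    # plus an l2 ridge term (lam / 2) * ||x||^2.
    margins = b * (A @ x)
    return np.mean(np.logaddexp(0.0, -margins)) + 0.5 * lam * (x @ x)

def logistic_grad(x, A, b, lam):
    # Gradient of logistic_loss: d/dm log(1 + e^{-m}) = -1 / (1 + e^{m}).
    margins = b * (A @ x)
    s = -b / (1.0 + np.exp(margins))
    return A.T @ s / len(b) + lam * x
```

With `lam` chosen appropriately, the Hessian's condition number can be controlled, which is how the regularization level is set in the described experiments.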
Researcher Affiliation | Academia | Yury Demidovich, Grigory Malinovsky, Egor Shulgin, Peter Richtárik; King Abdullah University of Science and Technology (KAUST), Saudi Arabia.* *Contacts: EMAIL
Pseudocode | Yes | Algorithm 1: Double Sketched (S)GD
1: Parameters: learning rate γ > 0; distribution 𝒟; initial model and shift x_0, v ∈ R^d.
2: for t = 0, 1, 2, ... do
3:   Sample a sketch: S_t ∼ 𝒟
4:   Form a gradient estimator: g_t = ∇f_{S_t}(x_t) (exact, option I) or g_{S_t}(x_t) (stochastic, inexact, option II)
5:   Perform a gradient-type step: x_{t+1} = x_t − γ g_t
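The algorithm above can be sketched in plain NumPy. Assumptions not in the excerpt: sketches are represented as length-d vectors encoding 0/1 diagonal masks, sampled uniformly, and the exact variant (option I) uses ∇f_S(x) = S ∇f(Sx), which holds for f_S(x) = f(Sx) with a diagonal S:

```python
import numpy as np

def double_sketched_gd(grad_f, sketches, x0, gamma, T, rng):
    # Exact variant (option I) of Double Sketched GD. Each sketch is a
    # length-d vector standing in for a diagonal masking matrix S; under
    # f_S(x) = f(Sx), the exact sketched gradient is S * grad_f(S * x).
    x = x0.copy()
    for _ in range(T):
        S = sketches[rng.integers(len(sketches))]  # sample S_t ~ D (uniform here)
        g = S * grad_f(S * x)                      # gradient estimator g_t
        x = x - gamma * g                          # x_{t+1} = x_t - gamma * g_t
    return x
```

Swapping `grad_f` for a minibatch estimator would give the inexact variant (option II).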
Open Source Code | No | Our implementation is based on the public GitHub repository of Konstantin Mishchenko. Simulations were performed on a machine with 24 Intel(R) Xeon(R) Gold 6246 CPU @ 3.30 GHz.
Open Datasets | Yes | Specifically, we consider an ℓ2-regularized logistic regression optimization problem with the a5a dataset from the LibSVM repository (Chang & Lin, 2011).
Dataset Splits | Yes | The dataset is shuffled and split into train and test sets in 0.75/0.25 proportions.
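A minimal sketch of the described shuffle-and-split; the paper's exact seeding and tooling are not specified in the excerpt:

```python
import numpy as np

def shuffle_split(A, b, train_frac=0.75, seed=0):
    # Shuffle the rows, then split into train/test in the given proportion.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(b))
    cut = int(train_frac * len(b))
    tr, te = idx[:cut], idx[cut:]
    return A[tr], b[tr], A[te], b[te]
```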
Hardware Specification | Yes | Simulations were performed on a machine with 24 Intel(R) Xeon(R) Gold 6246 CPU @ 3.30 GHz.
Software Dependencies | Yes | Visualization is performed using the boxplot method from the Seaborn (version 0.11.0) library (Waskom, 2021) with default parameters. We use the array_split method from the NumPy (version 1.26.2) package (Harris et al., 2020).
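For reference, NumPy's `array_split` (unlike `np.split`) tolerates a number of blocks that does not divide the array length evenly, which is presumably why it is used to form coordinate blocks:

```python
import numpy as np

# np.array_split accepts a block count K that does not divide the length d
# evenly; the resulting block sizes differ by at most one.
d, K = 10, 3
blocks = np.array_split(np.arange(d), K)
print([len(blk) for blk in blocks])  # → [4, 3, 3]
```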
Experiment Setup | Yes | The regularization parameter λ in (46) is set to guarantee that the condition number of the loss function κ_f is equal to 10^2. The dataset is shuffled and split into train and test sets in 0.75/0.25 proportions. Initial model weights x_0 ∈ R^d are generated from a standard Gaussian distribution 𝒩(0, 1). For every sparsity level, Perm-K sketches 𝒟 = {S_1, ..., S_K} are generated, leading to different MAST problem formulations. This process is repeated 10 times with varying permutations π by changing random seeds. After the ERM and MAST models x are obtained, the accuracy of the sparsified model S_i x is calculated for every i ∈ [K] on the test set. This results in distributions of accuracies for every sparsity level.
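The Perm-K sketch generation described above can be illustrated as follows. This is a hypothetical construction assuming diagonal masks scaled by K so that the sketches average to the identity; the paper's exact Perm-K definition may differ:

```python
import numpy as np

def perm_k_sketches(d, K, rng):
    # Hypothetical Perm-K construction: cut a random permutation of the d
    # coordinates into K blocks; sketch S_i keeps only block i, scaled by K
    # so the sketches are unbiased on average: (1/K) * sum_i S_i = I.
    perm = rng.permutation(d)
    sketches = []
    for block in np.array_split(perm, K):
        s = np.zeros(d)
        s[block] = K
        sketches.append(s)
    return sketches
```

Repeating this with different seeds yields different permutations π, and hence the different MAST problem instances whose accuracies form the reported distributions.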