Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Compressed and distributed least-squares regression: convergence rates with applications to federated learning

Authors: Constantin Philippenko, Aymeric Dieuleveut

JMLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | These results are validated by numerical experiments which help to get an intuition of the underlying mechanisms. The code is provided on our GitHub repository: https://github.com/philipco/structured_noise. We summarize hereafter the structure of the paper in Figure 1. ... 3.4 Numerical experiments on Algorithm 2 ... 4.3 Numerical experiments
Researcher Affiliation | Academia | Constantin Philippenko EMAIL Ecole polytechnique, Institut Polytechnique de Paris, CMAP; Aymeric Dieuleveut EMAIL Ecole polytechnique, Institut Polytechnique de Paris, CMAP
Pseudocode | Yes | Algorithm 1 (LMS) ... Algorithm 2 (Centralized compressed LMS) ... Algorithm 3 (Distributed compressed LMS) ... Algorithm 4 (Distributed compressed LMS with control variates)
Open Source Code | Yes | The code is provided on our GitHub repository: https://github.com/philipco/structured_noise.
Open Datasets | Yes | On non-simulated datasets, namely quantum (Caruana et al., 2004) and cifar-10 (Krizhevsky et al., 2009)
Dataset Splits | No | To compute the optimal point (and so to compute the excess loss), we run SGD over 200 passes on the whole dataset and consider the last Polyak-Ruppert average as the optimal point w_*. For cifar-10 and quantum, we run Algorithm 2 for 5·10^6 iterations (it corresponds to 100 passes on the whole dataset) with a batch-size b = 16
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU, GPU models, or cloud instance types) used for the experiments. It only describes the experimental setup in terms of datasets, algorithm parameters, and iteration counts.
Software Dependencies | No | The paper does not explicitly mention specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions or other libraries/solvers).
Experiment Setup | Yes | Setting: (a) Synthetic dataset generation: The dataset is generated using Model 2 with K = 10^7, σ^2 = 1, an optimal point w_* set as a constant vector of ones and a geometric eigenvalue decay of D_1 = Diag((1/i^4))_{i=1}^d (resp. D_2 = Diag((1/i))_{i=1}^d)... (c) Algorithm 2: We take a constant step-size γ = 1/(2(ω + 1)R^2) with R^2 the trace of the features covariance, and w_0 = 0 as the initial point. We set the batch-size b = 1 and the compressor variance ω = 10 for synthetic datasets. For cifar-10 and quantum, we run Algorithm 2 for 5·10^6 iterations (it corresponds to 100 passes on the whole dataset) with a batch-size b = 16, and using an s-quantization (Definition 26). We set s = 16 for cifar-10 (factor-2 compression) and s = 8 for quantum (factor-4 compression); the compressor variance is therefore ω ≃ 1 for both datasets.
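For readers unfamiliar with the s-quantization compressor referenced in the setup above, the following is a minimal sketch of a generic unbiased stochastic s-level quantizer of the QSGD family. It is an illustration under assumptions, not the authors' Definition 26 or their implementation; the function name and the use of NumPy are our own choices.

```python
import numpy as np

def s_quantization(x, s, rng):
    """Sketch of an unbiased stochastic s-level quantizer (QSGD-style).

    Each coordinate |x_i| / ||x|| is randomly rounded to one of the two
    adjacent levels in {0, 1/s, 2/s, ..., 1}, with probabilities chosen
    so that the quantizer is unbiased: E[Q_s(x)] = x.
    """
    norm = np.linalg.norm(x)
    if norm == 0.0:
        return np.zeros_like(x)
    ratio = np.abs(x) / norm * s            # lies in [0, s]
    lower = np.floor(ratio)
    # Round up with probability equal to the fractional part (unbiased).
    round_up = rng.random(x.shape) < (ratio - lower)
    levels = lower + round_up
    return norm * np.sign(x) * levels / s
```

Larger s means finer levels, hence a smaller compressor variance ω but weaker compression; this matches the trade-off quoted above, where s = 16 gives factor-2 compression on cifar-10 and s = 8 gives factor-4 compression on quantum, both with ω roughly 1.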