Randomized Block-Diagonal Preconditioning for Parallel Learning

Authors: Celestine Mendler-Dünner, Aurelien Lucchi

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our main contribution is to demonstrate that the convergence of these methods can significantly be improved by a randomization technique which corresponds to repartitioning coordinates across tasks during the optimization procedure. We provide a theoretical analysis that accurately characterizes the expected convergence gains of repartitioning and validate our findings empirically on various traditional machine learning tasks.
Researcher Affiliation | Academia | ¹University of California, Berkeley; ²ETH Zürich.
Pseudocode | Yes | Algorithm 1 Block-Diagonal Preconditioning for (1) with (i) static and (ii) dynamic partitioning
Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | We have chosen the gisette, the mushroom and the covtype dataset that can be downloaded from (Dua & Graff, 2017)
Dataset Splits | No | The paper does not specify explicit training/validation/test dataset splits (e.g., percentages or counts). It mentions 'Validation of Convergence Rates' but this refers to validating theoretical results, not a dataset split.
Hardware Specification | No | The paper mentions 'multi-core machine with shared memory' but does not specify any exact GPU/CPU models, processor types, or memory details used for the experiments.
Software Dependencies | No | The paper mentions using existing algorithms like COCOA, ADN, and LS but does not specify any software dependencies with version numbers (e.g., Python, PyTorch, specific library versions).
Experiment Setup | Yes | If not stated otherwise we use λ = 1. We perform this experiment for different values of K and α. ...with a fixed step size η.
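The Pseudocode and Experiment Setup rows above refer to Algorithm 1, which applies a block-diagonal preconditioner per task and, in its dynamic variant, re-draws the assignment of coordinates to tasks at random during optimization. The following is a minimal sketch of that dynamic repartitioning idea, not the authors' code: the ridge-regression objective is an illustrative assumption, λ = 1 and the fixed step size η are taken from the Experiment Setup row, and all function and variable names are hypothetical.

```python
# Hedged sketch of randomized (dynamic) block-diagonal preconditioning.
# Assumptions: ridge-regression objective, lambda = 1, fixed step size eta;
# names are hypothetical and chosen for illustration only.
import numpy as np

def dynamic_block_preconditioned_descent(A, b, K=4, lam=1.0, eta=1.0,
                                         rounds=50, rng=None):
    """Approximately minimize 0.5*||A w - b||^2 + 0.5*lam*||w||^2."""
    rng = np.random.default_rng(rng)
    n, d = A.shape
    w = np.zeros(d)
    for _ in range(rounds):
        # Dynamic partitioning: redraw a random split of the d coordinates
        # into K blocks at every round (static partitioning would fix it once).
        perm = rng.permutation(d)
        blocks = np.array_split(perm, K)
        grad = A.T @ (A @ w - b) + lam * w        # full gradient
        step = np.zeros(d)
        for blk in blocks:                        # one task per block
            A_blk = A[:, blk]
            # Block-diagonal preconditioner: the Hessian restricted to this block.
            H_blk = A_blk.T @ A_blk + lam * np.eye(len(blk))
            step[blk] = np.linalg.solve(H_blk, grad[blk])
        w -= eta * step                           # fixed step size eta
    return w

# Usage on synthetic data.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50))
b = rng.standard_normal(200)
w_hat = dynamic_block_preconditioned_descent(A, b, K=4, eta=1.0, rounds=100)
```

Static partitioning, the other variant named in Algorithm 1's caption, would correspond to drawing `blocks` once before the outer loop and reusing that partition in every round.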