Decentralised Learning with Random Features and Distributed Gradient Descent

Authors: Dominic Richards, Patrick Rebeschini, Lorenzo Rosasco

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We present simulations that show how the number of Random Features, iterations and samples impact predictive performance. For our experiments we consider subsets of the SUSY data set (Baldi et al., 2014), as well as single machine and Distributed Gradient Descent with a fixed step size η = 1.
Researcher Affiliation | Academia | 1 Department of Statistics, University of Oxford, 24-29 St Giles', Oxford, OX1 3LB; 2 MaLGa Center, Università degli Studi di Genova, Genova, Italy; 3 Istituto Italiano di Tecnologia, Via Morego, 30, Genoa 16163, Italy; 4 Massachusetts Institute of Technology, Cambridge, MA 02139, USA.
Pseudocode | No | The paper describes the algorithm steps in text (e.g., "for v ∈ V, agents update their iterates for t ≥ 1 ...") but does not present a structured pseudocode or algorithm block; a hedged sketch of this update is given after the table.
Open Source Code | No | The paper does not contain any statement or link indicating the availability of open-source code for the described methodology.
Open Datasets | Yes | For our experiments we consider subsets of the SUSY data set (Baldi et al., 2014).
Dataset Splits | No | The paper states "Test size of 10^4 was used" but does not provide specific details on training or validation splits, such as percentages, sample counts, or references to predefined splits.
Hardware Specification | No | The paper does not provide any specific hardware details such as GPU or CPU models, memory, or cloud instance types used for running the experiments.
Software Dependencies | No | The paper does not provide specific software dependency details, such as library names with version numbers (e.g., Python, PyTorch, scikit-learn versions).
Experiment Setup | Yes | For our experiments we consider subsets of the SUSY data set (Baldi et al., 2014), as well as single machine and Distributed Gradient Descent with a fixed step size η = 1. M = 300. The tuning parameter associated to the bandwidth was fixed to σ = 10^{1/2}. A runnable sketch of this setup appears after the table.
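
The "Pseudocode" row notes that the update rule appears only in prose. Written out, that textual description is consistent with the standard Distributed Gradient Descent iteration below; the symbols (gossip matrix P, step size η, local empirical risk L̂_v) are our notation for a plausible reconstruction, not quoted from the paper.

```latex
% Standard DGD iteration consistent with the paper's textual description.
% w_t^{(v)}: iterate of agent v at step t; P: doubly stochastic gossip matrix;
% \eta: fixed step size; \hat{L}_v: local empirical risk of agent v.
w_{t+1}^{(v)} \;=\; \sum_{u \in V} P_{vu}\, w_t^{(u)} \;-\; \eta \,\nabla \hat{L}_v\!\bigl(w_t^{(v)}\bigr),
\qquad v \in V,\; t \ge 1.
```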
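
For the "Experiment Setup" row, the following is a minimal sketch of how the reported configuration (M = 300 random features, bandwidth 10^{1/2}, DGD with fixed step size η = 1) could be reproduced. The Gaussian random-feature map, squared loss, cycle-graph gossip matrix, and the synthetic stand-in for the SUSY subsets are all our assumptions; the paper does not specify these details.

```python
import numpy as np

# Hypothetical sketch: M = 300 random Fourier features for a Gaussian kernel,
# trained with Distributed Gradient Descent (DGD) under squared loss with
# fixed step size eta = 1. The gossip matrix P (a uniform-weight cycle graph)
# and the loss are assumptions, not details taken from the paper.

rng = np.random.default_rng(0)

def random_features(X, M=300, sigma=10 ** 0.5, rng=rng):
    """Map X of shape (n, d) to (n, M) random Fourier features."""
    d = X.shape[1]
    W = rng.normal(scale=1.0 / sigma, size=(d, M))
    b = rng.uniform(0.0, 2 * np.pi, size=M)
    return np.sqrt(2.0 / M) * np.cos(X @ W + b)

def dgd(features, labels, P, eta=1.0, T=100):
    """Run T rounds of DGD; features[v], labels[v] are agent v's local data."""
    n_agents = len(features)
    M = features[0].shape[1]
    w = np.zeros((n_agents, M))
    for _ in range(T):
        # Each agent averages its neighbours' iterates, then takes a local
        # gradient step on its own empirical squared-loss risk.
        w_mixed = P @ w
        grads = np.stack([
            features[v].T @ (features[v] @ w[v] - labels[v]) / len(labels[v])
            for v in range(n_agents)
        ])
        w = w_mixed - eta * grads
    return w

# Toy usage with synthetic data standing in for SUSY subsets (18 features).
n_agents, n_local, d = 5, 200, 18
X = rng.normal(size=(n_agents * n_local, d))
y = np.sign(X[:, 0] + 0.1 * rng.normal(size=len(X)))
Z = random_features(X)
features = np.split(Z, n_agents)
labels = np.split(y, n_agents)

# Uniform-weight cycle graph: each agent mixes with itself and 2 neighbours.
P = np.zeros((n_agents, n_agents))
for v in range(n_agents):
    P[v, [v, (v - 1) % n_agents, (v + 1) % n_agents]] = 1.0 / 3.0

w = dgd(features, labels, P, eta=1.0, T=100)
print("train MSE per agent:",
      [float(np.mean((features[v] @ w[v] - labels[v]) ** 2))
       for v in range(n_agents)])
```

Replacing the synthetic data with actual SUSY subsets and a held-out test set of 10^4 points, as the "Dataset Splits" row mentions, would complete the reproduction.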