Optimal Statistical Rates for Decentralised Non-Parametric Regression with Linear Speed-Up

Authors: Dominic Richards, Patrick Rebeschini

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | "We prove our results under the standard source and capacity assumptions in non-parametric regression. These assumptions relate, respectively, to the projection of the optimal predictor on the hypothesis space and to the effective dimension of this space [59, 12]. A contribution of this work is to show that proper tuning yields, up to poly-logarithmic terms, optimal non-parametric rates in decentralised learning. Theorem 1. Let Assumptions 1, 2, 3 hold with r >= 1/2 and 2r + gamma > 2. Let t be the smallest integer greater than the quantity..."
Researcher Affiliation | Academia | Dominic Richards, Department of Statistics, University of Oxford, 24-29 St Giles, Oxford, OX1 3LB, dominic.richards@spc.ox.ac.uk; Patrick Rebeschini, Department of Statistics, University of Oxford, 24-29 St Giles, Oxford, OX1 3LB, patrick.rebeschini@stats.ox.ac.uk
Pseudocode | No | The paper describes the Distributed Gradient Descent algorithm using mathematical equations and text in Section 2.3, e.g. the update rule (Eq. 3) ω_{t+1,v} = Σ_{w∈V} P_{vw} [ ω_{t,w} − (η_t/m) Σ_{i=1}^{m} (⟨ω_{t,w}, x_{i,w}⟩_H − y_{i,w}) x_{i,w} ], but it does not present the method as structured pseudocode or in an algorithm block with a clear label.
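To make the Distributed Gradient Descent update concrete, here is a minimal finite-dimensional sketch: each agent combines its neighbours' iterates through a doubly stochastic gossip matrix P after taking a local least-squares gradient step, in the spirit of Eq. (3). The ring topology, step size, dimensions, and sample counts below are illustrative choices, not values from the paper, and a Euclidean model stands in for the paper's RKHS setting.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, m, d, eta, T = 4, 50, 5, 0.1, 200   # illustrative sizes, not from the paper

w_star = rng.normal(size=d)                   # ground-truth predictor
X = rng.normal(size=(n_agents, m, d))         # local design matrices, one per agent
y = X @ w_star + 0.1 * rng.normal(size=(n_agents, m))  # noisy local labels

# Doubly stochastic gossip matrix for a 4-agent ring (Metropolis-style weights)
P = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

W = np.zeros((n_agents, d))                   # one iterate per agent
for t in range(T):
    # local gradient of the empirical squared loss (1/2m)||X_v w_v - y_v||^2
    resid = np.einsum('vmd,vd->vm', X, W) - y
    grads = np.einsum('vmd,vm->vd', X, resid) / m
    # gossip the locally updated iterates: w_{t+1,v} = sum_w P_vw (w_{t,w} - eta * grad_w)
    W = P @ (W - eta * grads)

# all agents should reach consensus near the true predictor
print(np.max(np.abs(W - w_star)))
```

With a connected communication graph and a doubly stochastic P, the gossip step drives the agents toward consensus while the local gradient steps drive that consensus toward the joint least-squares solution, which is the mechanism behind the paper's linear speed-up discussion.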
Open Source Code | No | The paper does not provide any statement or link regarding the availability of open-source code for the described methodology.
Open Datasets | No | The paper defines a theoretical learning problem involving data sampled from an "unknown probability measure" ρ, but it does not refer to, or provide access information for, any specific publicly available or open dataset.
Dataset Splits | No | The paper is theoretical and does not conduct experiments with specific datasets, so no dataset split information (training, validation, test) is provided.
Hardware Specification | No | The paper is theoretical and does not describe any experiments that would require specific hardware, so no hardware specifications are provided.
Software Dependencies | No | The paper is theoretical and does not describe any experimental setup or implementation details that would require specific software dependencies, so none are provided.
Experiment Setup | No | The paper is theoretical and does not describe an experimental setup. While it discusses theoretical tuning parameters such as the step size and number of iterations, these appear within the mathematical analysis rather than as concrete hyperparameter values for an empirical experiment.