Effective Distributed Learning with Random Features: Improved Bounds and Algorithms

Authors: Yong Liu, Jiankun Liu, Shuqiang Wang

ICLR 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this section, we validate our theoretical findings by performing experiments on both simulated and real datasets." Numerical Experiments: "Inspired by numerical experiments in (Rudi & Rosasco, 2017; Li et al., 2019e), we consider a spline kernel of order q..." Real Data: "In this experiment, we consider the performance on real data. We use 6 publicly available datasets from LIBSVM Data."
Researcher Affiliation | Academia | Yong Liu (1,2), Jiankun Liu (3), Shuqiang Wang (4); 1 Gaoling School of Artificial Intelligence, Renmin University of China; 2 Beijing Key Laboratory of Big Data Management and Analysis Methods; 3 Institute of Information Engineering, Chinese Academy of Sciences; 4 Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences
Pseudocode | Yes | Algorithm 1: Distributed KRR with Random Features and Communications (DKRR-RF-CM); an illustrative sketch is given after this table.
Open Source Code | No | The paper does not provide a statement about releasing its own source code for the methodology described, nor does it provide a link to a code repository.
Open Datasets | Yes | "We use 6 publicly available datasets from LIBSVM Data." Footnote 2: http://www.csie.ntu.edu.tw/~cjlin/libsvm
Dataset Splits | Yes | "We generate 10000 samples for training and 10000 samples for testing." and "...fine tune λ around |D|^{-1/2} using 5-fold cross validation..."
Hardware Specification | No | The paper does not provide any specific details about the hardware used for the experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers.
Experiment Setup | Yes | "We set the size of the random features to be M = √|D|, and fine tune λ around |D|^{-1/2} using 5-fold cross validation; the tuned set is {2^{-5}, 2^{-3}, ..., 2^5}·|D|^{-1/2}." and "The empirical evaluations with Gaussian kernel, exp(-‖x - x'‖²/σ), are given in Figure 2, where the optimal σ and λ are selected by 5-fold cross-validation, σ ∈ {2^i, i = -10, -8, ..., 10}, λ ∈ {2^{-5}, 2^{-3}, ..., 2^5}·|D|^{-1/2}, and the number of random features is 2√|D|." The tuning grids are reconstructed in the second sketch after this table.
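
The Pseudocode row names Algorithm 1 (DKRR-RF-CM) but does not reproduce it. The following is a minimal sketch of distributed KRR with shared random Fourier features and gradient-averaging communication rounds, assuming a Gaussian kernel exp(-‖x - x'‖²/σ); the function names (random_fourier_features, dkrr_rf_cm), the Newton-type correction step, and all defaults are illustrative assumptions, not the paper's exact Algorithm 1.

```python
import numpy as np

def random_fourier_features(X, omega, b):
    """Map X (n, d) to M random Fourier features approximating a Gaussian kernel."""
    M = omega.shape[1]
    return np.sqrt(2.0 / M) * np.cos(X @ omega + b)

def dkrr_rf_cm(X_parts, y_parts, M, lam, sigma, n_rounds=5, seed=0):
    """Distributed ridge regression on shared random features with
    gradient-averaging communication rounds (illustrative only)."""
    rng = np.random.default_rng(seed)
    d = X_parts[0].shape[1]
    n_total = sum(len(y) for y in y_parts)

    # All workers use the same random features, drawn once and broadcast.
    # omega ~ N(0, 2/sigma) matches the kernel exp(-||x - x'||^2 / sigma).
    omega = rng.normal(scale=np.sqrt(2.0 / sigma), size=(d, M))
    b = rng.uniform(0.0, 2.0 * np.pi, size=M)

    # Each worker keeps its regularized local covariance and local target vector.
    covs, rhs = [], []
    for X_j, y_j in zip(X_parts, y_parts):
        Z_j = random_fourier_features(X_j, omega, b)
        covs.append(Z_j.T @ Z_j / len(y_j) + lam * np.eye(M))
        rhs.append(Z_j.T @ y_j / len(y_j))

    # Round 0: plain average of the local ridge estimators.
    w = np.mean([np.linalg.solve(C, r) for C, r in zip(covs, rhs)], axis=0)

    # Communication rounds: average local gradients (weighted by local sample
    # size), let each worker apply a Newton-type correction, and re-average.
    weights = [len(y) / n_total for y in y_parts]
    for _ in range(n_rounds):
        grads = [C @ w - r for C, r in zip(covs, rhs)]
        g = np.average(grads, axis=0, weights=weights)
        w = np.mean([w - np.linalg.solve(C, g) for C in covs], axis=0)
    return omega, b, w
```

A toy call would split (X, y) into m equal blocks and pass lam near 1/√|D|, matching the tuning described in the Experiment Setup row.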
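
To make the Experiment Setup row concrete, here is a small reconstruction of the reported tuning grids using scikit-learn's RBFSampler and Ridge; the pipeline, the estimator choice, and the scoring defaults are assumptions, while the grids for σ, λ, and the number of random features follow the quoted text (real-data setting with 2√|D| features).

```python
import numpy as np
from sklearn.kernel_approximation import RBFSampler
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline

def tune_rf_krr(X, y):
    n = len(y)                           # |D|
    M = int(2 * np.sqrt(n))              # number of random features, 2 * sqrt(|D|)
    # lambda tuned around |D|^{-1/2}: {2^-5, 2^-3, ..., 2^5} * |D|^{-1/2}
    lam_grid = [2.0 ** i / np.sqrt(n) for i in range(-5, 6, 2)]
    # Gaussian kernel exp(-||x - x'||^2 / sigma), sigma in {2^i, i = -10, -8, ..., 10};
    # RBFSampler uses exp(-gamma ||x - x'||^2), so gamma = 1 / sigma.
    gamma_grid = [1.0 / 2.0 ** i for i in range(-10, 11, 2)]

    model = make_pipeline(RBFSampler(n_components=M, random_state=0), Ridge())
    search = GridSearchCV(
        model,
        param_grid={"rbfsampler__gamma": gamma_grid, "ridge__alpha": lam_grid},
        cv=5,                             # 5-fold cross-validation
    )
    return search.fit(X, y)
```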