Divide-and-Conquer Learning with Nyström: Optimal Rate and Algorithm

Authors: Rong Yin, Yong Liu, Lijing Lu, Weiping Wang, Dan Meng (pp. 6696-6703)

AAAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experimental results on several real-world large-scale datasets containing up to 1M data points show that DC-NY significantly outperforms the state-of-the-art approximate KRLS estimates.
Researcher Affiliation | Academia | Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China; School of Cyber Security, University of Chinese Academy of Sciences, Beijing, China
Pseudocode | Yes | Algorithm 1: Divide-and-Conquer KRLS with Nyström (DC-NY) (a hedged sketch of the algorithm follows this table)
Open Source Code | No | The paper provides links to code for the baseline methods (RF, NY, DC-RF) but not for the proposed DC-NY algorithm.
Open Datasets | Yes | The comparative experiments are based on four real-world datasets: SUSY, HIGGS, Year Prediction MSD, and covtype, obtained from a website referenced in a footnote of the paper.
Dataset Splits | Yes | We randomly sample 1×10^6 data points on SUSY and HIGGS, use the whole of Year Prediction MSD and covtype, and then randomly divide each experimental dataset into a training set and a prediction set, of which the training set accounts for 70% (see the split sketch below).
Hardware Specification | Yes | Each experiment is measured on a server with a 2.40 GHz Intel(R) Xeon(R) E5-2630 v3 CPU and 32 GB of RAM, in Matlab.
Software Dependencies | No | The paper only mentions 'Matlab' without a specific version number and does not list other software dependencies with version details.
Experiment Setup | Yes | For ensuring fairness, we use the same way to tune parameters: σ in 2^[-2:+0.5:10] and λ in 2^[-21:+1:3], on each dataset and algorithm (see the grid sketch below).
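Since no official DC-NY code was released, the snippet below is a minimal Python sketch of the divide-and-conquer Nyström KRLS idea summarized in Algorithm 1, assuming a Gaussian kernel, uniform landmark sampling, and the standard Nyström ridge system; all function names and the small jitter term are illustrative choices, not the authors' implementation.

```python
import numpy as np

def gaussian_kernel(A, B, sigma):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-sq / (2.0 * sigma**2))

def nystrom_krls_fit(X, y, n_landmarks, sigma, lam, rng):
    """Fit Nystrom-approximated KRLS on one partition of the data."""
    idx = rng.choice(len(X), size=min(n_landmarks, len(X)), replace=False)
    L = X[idx]                            # uniformly sampled Nystrom landmarks
    K_nm = gaussian_kernel(X, L, sigma)   # n x m cross-kernel
    K_mm = gaussian_kernel(L, L, sigma)   # m x m landmark kernel
    # Standard Nystrom ridge system: (K_nm' K_nm + n*lam*K_mm) alpha = K_nm' y;
    # a tiny jitter keeps the solve numerically stable.
    A = K_nm.T @ K_nm + len(X) * lam * K_mm
    alpha = np.linalg.solve(A + 1e-10 * np.eye(len(L)), K_nm.T @ y)
    return L, alpha

def dc_ny_fit(X, y, n_parts, n_landmarks, sigma, lam, seed=0):
    """Divide step: split the data into n_parts and fit a local model on each."""
    rng = np.random.default_rng(seed)
    parts = np.array_split(rng.permutation(len(X)), n_parts)
    return [nystrom_krls_fit(X[p], y[p], n_landmarks, sigma, lam, rng) for p in parts]

def dc_ny_predict(models, X_test, sigma):
    """Conquer step: average the local Nystrom KRLS predictions."""
    preds = [gaussian_kernel(X_test, L, sigma) @ alpha for L, alpha in models]
    return np.mean(preds, axis=0)
```

The design point this illustrates is that each local estimator only solves an m×m system on its own partition, and the final prediction simply averages the local predictions, which is the divide-and-conquer step the paper analyzes.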
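The 70/30 random split described in the Dataset Splits row could be reproduced with a sketch like the following; the seed and helper name are assumptions for illustration.

```python
import numpy as np

def split_70_30(X, y, seed=0):
    """Random 70% training / 30% prediction split, as described in the paper."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(len(X))
    n_train = int(0.7 * len(X))        # training set accounts for 70%
    tr, te = perm[:n_train], perm[n_train:]
    return X[tr], y[tr], X[te], y[te]
```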
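The hyperparameter grids in the Experiment Setup row can be materialized as below; note that the negative starting exponents (-2 and -21) are reconstructed from the paper's step notation, since the extracted text dropped the signs.

```python
import numpy as np

# Assumed grids: sigma in 2^[-2 : +0.5 : 10], lambda in 2^[-21 : +1 : 3]
# (negative starting exponents are an assumption, not verbatim from the PDF).
sigma_grid = 2.0 ** np.arange(-2.0, 10.5, 0.5)    # 2^-2, 2^-1.5, ..., 2^10
lambda_grid = 2.0 ** np.arange(-21.0, 4.0, 1.0)   # 2^-21, 2^-20, ..., 2^3
```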