Asynchronous Doubly Stochastic Sparse Kernel Learning

Authors: Bin Gu, Xin Miao, Zhouyuan Huo, Heng Huang

AAAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Importantly, the experimental results on various large-scale real-world datasets show that our AsyDSSKL method has a significant superiority in computational efficiency, at both the training and predicting steps, over the existing kernel methods."
Researcher Affiliation | Academia | Bin Gu (1), Xin Miao (2), Zhouyuan Huo (1), Heng Huang (1): (1) Department of Electrical & Computer Engineering, University of Pittsburgh, USA; (2) Department of Computer Science and Engineering, University of Texas at Arlington, USA. Emails: big10@pitt.edu, xin.miao@mavs.uta.edu, zhouyuan.huo@pitt.edu, heng.huang@pitt.edu
Pseudocode | Yes | Algorithm 1 (asynchronous sparse random feature learning framework) and Algorithm 2 (asynchronous doubly stochastic sparse kernel learning, AsyDSSKL). A minimal sketch of the underlying doubly stochastic update appears after this table.
Open Source Code | No | The paper states "We implement our AsyDSSKL in C++" but provides no link to, or explicit statement about releasing, the source code.
Open Datasets | Yes | Table 3 summarizes the six large-scale real-world datasets used in the experiments: Covtype B, RCV1, SUSY, Covtype M, MNIST, and Aloi, all from https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/. A loading snippet appears after this table.
Dataset Splits | No | The paper refers to a "training set" and a "testing set" but does not specify explicit train/validation/test splits by percentage or absolute sample count, nor does it reference predefined standard splits.
Hardware Specification | Yes | "Our experiments are performed on a 32-core two-socket Intel Xeon E5-2699 machine where each socket has 16 cores."
Software Dependencies | No | The paper mentions C++ and OpenMP for the implementation but does not give version numbers or any other software dependencies.
Experiment Setup | Yes | "In the experiments, the value of steplength γ is selected from {10^2, 10, 1, 10^-1, 10^-2, 10^-3, 10^-4, 10^-5}. The number of inner loop iterations m is set to the size of the training set, and the number of outer loop iterations S is set to 10." A hedged sketch of this selection loop appears below.
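
The pseudocode in the paper (Algorithms 1 and 2) builds on doubly stochastic gradients: each iteration samples both a training point and a random feature. Below is a minimal serial Python sketch of that underlying update for an RBF kernel with squared loss. It illustrates the doubly stochastic idea only, not the paper's asynchronous, sparse, OpenMP-parallel C++ implementation; the function name, defaults, and loss choice are all assumptions.

```python
import numpy as np

def doubly_stochastic_krr(X, y, T=1000, gamma=0.1, reg=1e-4, sigma=1.0, seed=0):
    """Doubly stochastic kernel regression sketch: at step t, sample one
    training point AND one random Fourier feature of the RBF kernel, then
    take a functional gradient step for the squared loss."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    omegas = np.empty((T, d))  # random feature directions, one per iteration
    biases = np.empty(T)       # random feature phases
    alphas = np.zeros(T)       # functional-gradient coefficients

    def predict(x, t):
        # f_t(x) = sum_{j < t} alpha_j * sqrt(2) * cos(omega_j . x + b_j)
        if t == 0:
            return 0.0
        return float(alphas[:t] @ (np.sqrt(2.0) * np.cos(omegas[:t] @ x + biases[:t])))

    for t in range(T):
        i = rng.integers(n)                          # stochastic data sample
        omegas[t] = rng.normal(0.0, 1.0 / sigma, d)  # stochastic feature sample
        biases[t] = rng.uniform(0.0, 2.0 * np.pi)
        err = predict(X[i], t) - y[i]                # squared-loss residual
        alphas[:t] *= 1.0 - gamma * reg              # shrinkage from L2 regularizer
        alphas[t] = -gamma * err * np.sqrt(2.0) * np.cos(omegas[t] @ X[i] + biases[t])

    return omegas, biases, alphas
```

At test time, predictions reuse the same cosine feature expansion with all T coefficients; the paper's sparse and asynchronous refinements change how these coefficients are stored and updated across threads, not this basic recursion.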
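The datasets are distributed in LIBSVM's sparse text format, so one minimal way to load them in Python is via scikit-learn. The file name below is an assumption matching the repository's naming for the Covtype B (binary) dataset:

```python
from sklearn.datasets import load_svmlight_file

# load_svmlight_file handles .bz2 archives directly and returns the
# features as a scipy.sparse CSR matrix plus a dense label vector.
X_train, y_train = load_svmlight_file("covtype.libsvm.binary.bz2")
print(X_train.shape, y_train.shape)
```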
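The reported experiment settings translate into the following hedged sketch of the step-length selection loop, reusing the two snippets above. The paper's excerpt does not state the selection criterion, so `evaluate`, `X_val`, and `y_val` are hypothetical placeholders, and running S * m total updates on a full dataset is far beyond what this toy sketch's memory use allows.

```python
# Step-length grid, outer-loop count S, and inner-loop count m as reported.
step_grid = [1e2, 1e1, 1e0, 1e-1, 1e-2, 1e-3, 1e-4, 1e-5]
S = 10                 # number of outer loops
m = X_train.shape[0]   # inner iterations per outer loop = training-set size

best_err, best_gamma = float("inf"), None
for gamma in step_grid:
    # Total updates = S * m; evaluate(), X_val, y_val are hypothetical.
    omegas, biases, alphas = doubly_stochastic_krr(
        X_train.toarray(), y_train, T=S * m, gamma=gamma)
    err = evaluate(omegas, biases, alphas, X_val, y_val)
    if err < best_err:
        best_err, best_gamma = err, gamma
```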