Approximate Kernel Selection with Strong Approximate Consistency

Authors: Lizhong Ding, Yong Liu, Shizhong Liao, Yu Li, Peng Yang, Yijie Pan, Chao Huang, Ling Shao, Xin Gao. Pages 3462-3469.

AAAI 2019

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Finally, we empirically evaluate the approximate consistency of our algorithm as compared to state-of-the-art methods." |
| Researcher Affiliation | Academia | 1. Inception Institute of Artificial Intelligence (IIAI), Abu Dhabi, UAE; 2. King Abdullah University of Science and Technology (KAUST), Saudi Arabia; 3. Institute of Information Engineering, CAS, China; 4. Tianjin University, China; 5. Ningbo Institute of Computing Technology, CAS, China; 6. Ningbo Institute of Information Technology Application, CAS, China |
| Pseudocode | Yes | Algorithm 1: Nyström Approximate Kernel Selection; Algorithm 2: Update Probability |
| Open Source Code | No | The paper states "All the implementations are in the R language." but provides no link or explicit statement that the code is open-source or publicly available. |
| Open Datasets | Yes | "We conduct experiments on benchmark data sets from UCI repository and LIBSVM Data." (UCI: http://www.ics.uci.edu/~mlearn/MLRepository.html; LIBSVM: http://www.csie.ntu.edu.tw/~cjlin/libsvm) |
| Dataset Splits | No | The paper does not specify exact training, validation, or test split percentages or sample counts; it only gives the total number of examples l. |
| Hardware Specification | No | The paper provides no details about the hardware used to run the experiments. |
| Software Dependencies | No | The paper states "All the implementations are in the R language." but gives no version numbers for R or for any other libraries or solvers used. |
| Experiment Setup | Yes | "We set the sampling size c = 0.2l and the adaptive sampling size s = 0.1c. ... we just set µ = 0.005. ... we adopt Gaussian kernels κ(x, x′) = exp(−γ‖x − x′‖₂²) with variable width γ as our candidate kernel set K." |
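The reported setup (Nyström sampling with c = 0.2l and a candidate set of Gaussian kernels with variable width γ) can be illustrated with a minimal sketch. This is not the paper's Algorithm 1, which uses an adaptive sampling-probability update; it is a plain uniform-sampling Nyström approximation under the stated parameter settings, and all function names here are our own:

```python
import numpy as np

def gaussian_kernel(X, Y, gamma):
    """Gaussian kernel k(x, x') = exp(-gamma * ||x - x'||^2)."""
    d2 = (np.sum(X**2, axis=1)[:, None]
          + np.sum(Y**2, axis=1)[None, :]
          - 2.0 * X @ Y.T)
    return np.exp(-gamma * np.maximum(d2, 0.0))

def nystrom_approx(X, gamma, c, rng):
    """Uniform-sampling Nystrom factor L with K ≈ L @ L.T (rank <= c).

    The paper's algorithm instead updates sampling probabilities
    adaptively (its Algorithm 2); uniform sampling is used here only
    as an illustrative baseline.
    """
    idx = rng.choice(X.shape[0], size=c, replace=False)
    C = gaussian_kernel(X, X[idx], gamma)   # l x c block of K
    W = C[idx, :]                           # c x c intersection block
    # Pseudo-inverse square root of W via SVD for numerical stability.
    U, s, _ = np.linalg.svd(W)
    s = np.maximum(s, 1e-12)
    return C @ (U / np.sqrt(s))

rng = np.random.default_rng(0)
l = 200
X = rng.standard_normal((l, 5))
c = int(0.2 * l)                            # sampling size c = 0.2 l (paper's setting)
for gamma in [0.1, 1.0]:                    # candidate widths gamma (values are illustrative)
    L = nystrom_approx(X, gamma, c, rng)
    K = gaussian_kernel(X, X, gamma)
    err = np.linalg.norm(K - L @ L.T) / np.linalg.norm(K)
    print(f"gamma={gamma}: relative Frobenius error {err:.3f}")
```

In a kernel-selection loop, each candidate γ would be scored using the cheap approximation L @ L.T rather than the full l × l kernel matrix, which is the cost saving that approximate kernel selection exploits.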