Linear Kernel Tests via Empirical Likelihood for High-Dimensional Data

Authors: Lizhong Ding, Zhi Liu, Yu Li, Shizhong Liao, Yong Liu, Peng Yang, Ge Yu, Ling Shao, Xin Gao. Pages 3454-3461.

AAAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 'Finally, we conduct a series of experiments to evaluate the performance of our ELR statistics as compared to state-of-the-art linear statistics.'
Researcher Affiliation | Collaboration | 1. Inception Institute of Artificial Intelligence (IIAI), Abu Dhabi, UAE; 2. King Abdullah University of Science and Technology (KAUST), Saudi Arabia; 3. University of Macau, China; 4. Tianjin University, China; 5. Institute of Information Engineering, CAS, China; 6. Technology and Engineering Center for Space Utilization, CAS, China
Pseudocode | No | The paper describes its methods algorithmically but includes no explicit pseudocode blocks or figures.
Open Source Code | No | The paper states 'All implementations are in Python and R' but provides no link to source code and does not state that the code is open-sourced.
Open Datasets | Yes | 'The first set of experiments are conducted on two Gaussians p(x) = N(x | 0, I_d) and q(x) = N(x | 0, v I_d)...'
Dataset Splits | No | The paper mentions n samples per experiment but does not specify how data was split into training, validation, or test sets.
Hardware Specification | No | The paper gives no hardware details such as GPU models, CPU types, or memory specifications used for running the experiments.
Software Dependencies | No | The paper states 'All implementations are in Python and R' but gives no version numbers for these languages or for any libraries used.
Experiment Setup | Yes | 'Because Gaussian kernels are universal (Steinwart 2001), we adopt Gaussian kernels κ(x, x′) = exp(−γ‖x − x′‖²) with variable width γ ∈ {2^−10, 2^−9, ..., 2^10} as our candidate kernel set. For all evaluations, we set the significance level α = 0.05. All experiments are repeated 100 times.'
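The two-Gaussian data and the candidate kernel set quoted above can be sketched in a few lines of Python. This is an illustrative sketch only, not the authors' ELR implementation: the sample size, dimension, and variance scale v are placeholder values, and `gaussian_kernel` is a helper name of ours.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative values (not from the paper): dimension d, sample size n,
# and the variance-scaling factor v of the alternative distribution q.
d, n, v = 50, 100, 1.5
X = rng.standard_normal((n, d))               # samples from p(x) = N(0, I_d)
Y = np.sqrt(v) * rng.standard_normal((n, d))  # samples from q(x) = N(0, v I_d)

def gaussian_kernel(A, B, gamma):
    """Gaussian kernel matrix K[i, j] = exp(-gamma * ||A[i] - B[j]||^2)."""
    sq = np.sum(A**2, axis=1)[:, None] - 2.0 * A @ B.T + np.sum(B**2, axis=1)[None, :]
    return np.exp(-gamma * sq)

# Candidate kernel set: widths gamma in {2^-10, 2^-9, ..., 2^10}.
gammas = [2.0 ** k for k in range(-10, 11)]
kernels = [gaussian_kernel(X, Y, g) for g in gammas]
```

A kernel test statistic would then be computed from each candidate kernel matrix (or from a selected one); the paper's ELR statistics themselves are not reproduced here.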