Fast Cross-Validation
Authors: Yong Liu, Hailun Lin, Lizhong Ding, Weiping Wang, Shizhong Liao
IJCAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on lots of datasets show that our approximate CV has no statistical discrepancy with the original CV, but can significantly improve the efficiency. |
| Researcher Affiliation | Academia | Yong Liu1, Hailun Lin1, Lizhong Ding2, Weiping Wang1, Shizhong Liao3. 1Institute of Information Engineering, Chinese Academy of Sciences; 2King Abdullah University of Science and Technology (KAUST); 3Tianjin University |
| Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statement about making the source code for their methodology available, nor does it include a link to a code repository. |
| Open Datasets | Yes | The data sets are 18 publicly available data sets from LIBSVM Data (http://www.csie.ntu.edu.tw/~cjlin/libsvm): 9 data sets for classification and 9 data sets for regression. |
| Dataset Splits | Yes | For each data set, we run all methods 50 times with randomly selected 70% of all data for training and the other 30% for testing. |
| Hardware Specification | Yes | Experiments are performed on a PC of 3.1GHz CPU with 4GB memory. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers. |
| Experiment Setup | Yes | We use the Gaussian kernel κ(x, x′) = exp(−‖x − x′‖₂²/(2σ)) as our candidate kernel, with σ ∈ {2^i, i = −15, −14, …, 14, 15}. The regularization parameter λ ∈ {2^i, i = −15, −13, …, 13, 15}. For our methods, we set h = 0.05 and c = 0.1n. (A sketch of this protocol appears after the table.) |
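
Below is a minimal sketch of the evaluation protocol described in the Dataset Splits and Experiment Setup rows: 50 random 70/30 train/test splits, a Gaussian kernel, and the σ/λ grids quoted above, with hyperparameters selected by plain (non-accelerated) k-fold cross-validation on the training set. This is an assumption-laden illustration, not the paper's method: the paper's fast CV approximation (the part governed by h = 0.05 and c = 0.1n) is not reproduced here, kernel ridge regression via scikit-learn's KernelRidge is assumed as the learner, and the function names (`gaussian_kernel`, `cv_error`, `run_protocol`) are hypothetical.

```python
# Hypothetical sketch of the evaluation protocol quoted in the table above.
# NOT the paper's fast-CV approximation; only the baseline protocol is shown.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import KFold, train_test_split

def gaussian_kernel(X, Y, sigma):
    """Gaussian kernel kappa(x, x') = exp(-||x - x'||_2^2 / (2*sigma))."""
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-sq / (2.0 * sigma))

# Candidate grids quoted in the Experiment Setup row.
SIGMAS = [2.0**i for i in range(-15, 16)]      # sigma in {2^i : i = -15, ..., 15}
LAMBDAS = [2.0**i for i in range(-15, 16, 2)]  # lambda in {2^i : i = -15, -13, ..., 15}

def cv_error(K, y, lam, n_folds=5, seed=0):
    """Exact (non-accelerated) k-fold CV error for kernel ridge regression."""
    errs = []
    for tr, va in KFold(n_splits=n_folds, shuffle=True, random_state=seed).split(y):
        model = KernelRidge(alpha=lam, kernel="precomputed")
        model.fit(K[np.ix_(tr, tr)], y[tr])
        errs.append(mean_squared_error(y[va], model.predict(K[np.ix_(va, tr)])))
    return float(np.mean(errs))

def run_protocol(X, y, n_runs=50, seed=0):
    """50 random 70/30 splits; grid-search (sigma, lambda) by CV on the training set."""
    test_errs = []
    for run in range(n_runs):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.3, random_state=seed + run)
        best_err, best_params = np.inf, None
        for sigma in SIGMAS:
            K = gaussian_kernel(X_tr, X_tr, sigma)  # reused across the lambda grid
            for lam in LAMBDAS:
                err = cv_error(K, y_tr, lam)
                if err < best_err:
                    best_err, best_params = err, (sigma, lam)
        # Retrain on the full training split with the selected parameters.
        sigma, lam = best_params
        K_tr = gaussian_kernel(X_tr, X_tr, sigma)
        K_te = gaussian_kernel(X_te, X_tr, sigma)
        model = KernelRidge(alpha=lam, kernel="precomputed").fit(K_tr, y_tr)
        test_errs.append(mean_squared_error(y_te, model.predict(K_te)))
    return float(np.mean(test_errs)), float(np.std(test_errs))
```

The inner `cv_error` loop is exactly the expensive step the paper targets: it solves one kernel ridge problem per fold per grid point, which is what a fast CV approximation would replace.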