Efficient Approximation of Cross-Validation for Kernel Methods using Bouligand Influence Function
Authors: Yong Liu, Shali Jiang, Shizhong Liao
ICML 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results demonstrate that our approximate cross-validation criterion is sound and efficient. |
| Researcher Affiliation | Academia | Yong Liu (yongliu@tju.edu.cn), Shali Jiang (sljiang@tju.edu.cn), Shizhong Liao (szliao@tju.edu.cn), School of Computer Science and Technology, Tianjin University, Tianjin 300072, P. R. China |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any concrete access to source code for the methodology described. |
| Open Datasets | Yes | The evaluation is made on 20 publicly available data sets from LIBSVM Data: 10 data sets for classification and 10 data sets for regression, as listed in Table 1. |
| Dataset Splits | Yes | For each data set, we have run all the methods 10 times with training and testing data sets split randomly (50% of all the examples for training and the other 50% for testing). ... For each training set, we choose the τ and λ by cross-validation on the training set. |
| Hardware Specification | Yes | Experiments are performed on a Dell Vostro PC with 3.4-GHz 8-core CPU and 8-GB memory. |
| Software Dependencies | No | The paper does not provide specific software names with version numbers. |
| Experiment Setup | Yes | We use K(x, x′) = exp(−‖x − x′‖² / (2τ)) as our candidate kernels, τ ∈ {2^i, i = −6, −5, …, 7, 8}. The regularization parameter λ ∈ {2^i, i = −7, −6, …, 2}. |
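The quoted setup (Gaussian kernel plus power-of-two grids for τ and λ) can be sketched as follows. This is a minimal illustration, not the authors' code; the function name `gaussian_kernel` and the use of NumPy are assumptions.

```python
import numpy as np

def gaussian_kernel(X, Y, tau):
    """Gram matrix with K[i, j] = exp(-||X[i] - Y[j]||^2 / (2 * tau)),
    matching the candidate kernel quoted in the Experiment Setup row."""
    sq_dists = (
        np.sum(X**2, axis=1)[:, None]
        + np.sum(Y**2, axis=1)[None, :]
        - 2.0 * X @ Y.T
    )
    return np.exp(-sq_dists / (2.0 * tau))

# Hyperparameter grids from the paper's setup:
# tau in {2^i, i = -6, ..., 8} and lambda in {2^i, i = -7, ..., 2}.
tau_grid = [2.0**i for i in range(-6, 9)]
lambda_grid = [2.0**i for i in range(-7, 3)]
```

A model-selection run would then score each (τ, λ) pair on the training half of a random 50/50 split, repeated 10 times, as described in the Dataset Splits row.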