Fast Cross-Validation for Incremental Learning
Authors: Pooria Joulani, András György, Csaba Szepesvári
IJCAI 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments with state-of-the-art incremental learning algorithms confirm the practicality of the proposed method. |
| Researcher Affiliation | Academia | Pooria Joulani, András György, Csaba Szepesvári, Department of Computing Science, University of Alberta, Edmonton, AB, Canada, {pooria,gyorgy,szepesva}@ualberta.ca |
| Pseudocode | Yes | Algorithm 1: TREECV(s, e, f̂_{s..e}) |
| Open Source Code | No | The paper does not provide explicit statements or links to open-source code for the described methodology. |
| Open Datasets | Yes | We used datasets from the UCI repository [Lichman, 2013], downloaded from the LIBSVM website [Chang and Lin, 2011]. |
| Dataset Splits | Yes | The general recipe for computing the CV estimate is to run a learning algorithm separately for each CV fold, a computationally expensive process. ... k-fold cross-validation (k-CV): the dataset is partitioned into k subsets of approximately equal size, and each subset is used to evaluate a model trained on the k−1 other subsets to produce a numerical score; the k-CV performance estimate is then obtained as the average of the obtained scores. (A minimal sketch of this recipe appears below the table.) |
| Hardware Specification | Yes | The tests were run on a single core of a computer with an Intel Xeon E5430 processor and 20 GB of RAM. |
| Software Dependencies | No | The algorithms were implemented in Python/Cython and NumPy. No specific version numbers for these software dependencies are provided. |
| Experiment Setup | Yes | The regularization parameter was set to λ = 10⁻⁶ following the suggestion of Shalev-Shwartz et al. [2011]. For LSQSGD, we used the UCI YearPredictionMSD dataset (463,715 data points, 90 features) and, following the suggestion of Nemirovski et al. [2009], set the step-size to α = n^(−1/2). (A hedged sketch of these settings also follows the table.) |
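
The Dataset Splits row above quotes the paper's description of the standard k-CV recipe. For reference, here is a minimal Python/NumPy sketch of that baseline, the per-fold retraining that TREECV is designed to speed up; `train` and `evaluate` are placeholder callables of our own, not an API from the paper.

```python
import numpy as np

def k_fold_cv(X, y, k, train, evaluate, seed=0):
    """Plain k-CV: train a fresh model on k-1 folds, score on the held-out fold."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)  # k subsets of approximately equal size
    scores = []
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        model = train(X[train_idx], y[train_idx])  # retrained from scratch per fold
        scores.append(evaluate(model, X[test_idx], y[test_idx]))
    return float(np.mean(scores))  # the k-CV performance estimate
```

Note the k independent training runs: for incremental learners, much of that work is redundant, and avoiding exactly this redundancy is the paper's contribution.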
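
The Experiment Setup row pins down two hyperparameters: λ = 10⁻⁶ and α = n^(−1/2). The following is a hedged reconstruction of how such settings could plug into an L2-regularized least-squares SGD update; the variable names, the exact loss, and the update form are our assumptions, not the authors' code.

```python
import numpy as np

# Settings quoted in the row above; names are ours.
n, d = 463_715, 90   # UCI YearPredictionMSD: data points, features
lam = 1e-6           # regularization parameter λ (Shalev-Shwartz et al., 2011)
alpha = n ** -0.5    # step-size α = n^(−1/2) (Nemirovski et al., 2009)

def lsq_sgd_step(w, x, y, alpha, lam):
    """One SGD update for the assumed loss 0.5*(w·x − y)² + 0.5*λ‖w‖²."""
    grad = (w @ x - y) * x + lam * w
    return w - alpha * grad

# Example: a single update on one synthetic sample.
rng = np.random.default_rng(0)
w = np.zeros(d)
x, y_val = rng.standard_normal(d), 1.0
w = lsq_sgd_step(w, x, y_val, alpha, lam)
```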