Deep Boosting

Authors: Corinna Cortes, Mehryar Mohri, Umar Syed

ICML 2014

Reproducibility assessment. Each entry below lists a reproducibility variable, the assessed result, and the supporting LLM response.
Research Type: Experimental. "We report the results of several experiments showing that its performance compares favorably to that of AdaBoost and Logistic Regression and their L1-regularized variants." (Section 4, Experiments)
Researcher Affiliation: Collaboration. Corinna Cortes (CORINNA@GOOGLE.COM), Google Research, 111 8th Avenue, New York, NY 10011; Mehryar Mohri (MOHRI@CIMS.NYU.EDU), Courant Institute and Google Research, 251 Mercer Street, New York, NY 10012; Umar Syed (USYED@GOOGLE.COM), Google Research, 111 8th Avenue, New York, NY 10011.
Pseudocode: Yes. "Figure 2. Pseudocode of the DeepBoost algorithm for both the exponential loss and the logistic loss." A simplified sketch of one round is given below.
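To make the quoted pseudocode concrete, here is a minimal Python sketch of the flavor of one DeepBoost round under the exponential loss: among candidate base classifiers drawn from hypothesis families of varying complexity, pick the one whose weighted error plus complexity penalty λr + β is smallest, then reweight examples as in AdaBoost. This is an illustrative simplification, not the authors' exact Figure 2 update; the `candidates` interface and the additive form of the selection score are assumptions.

```python
import numpy as np

def deepboost_round_sketch(X, y, D, candidates, lam=0.01, beta=0.001):
    """One simplified DeepBoost-style round (exponential loss).

    y: labels in {-1, +1}; D: distribution over the m examples;
    candidates: list of (predict_fn, r) pairs, where r estimates the
    Rademacher complexity of the family the hypothesis came from.
    """
    best = None
    for predict, r in candidates:
        pred = predict(X)                     # predictions in {-1, +1}
        eps = float(np.sum(D * (pred != y)))  # weighted error under D
        # Penalize hypotheses from complex families: a complex h is only
        # chosen if it beats simpler ones by more than lam * r + beta.
        score = eps + lam * r + beta
        if best is None or score < best[0]:
            best = (score, eps, predict, pred)
    _, eps, h, pred = best
    eps = min(max(eps, 1e-12), 1 - 1e-12)     # guard the log below
    alpha = 0.5 * np.log((1 - eps) / eps)     # AdaBoost-style step size
    D = D * np.exp(-alpha * y * pred)         # exponential-loss reweighting
    return h, alpha, D / D.sum()
```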
Open Source Code: No. No explicit statement about an open-source code release, or a link to a repository, is found.
Open Datasets: Yes. "We tested DeepBoost on the same UCI datasets used by these authors, http://archive.ics.uci.edu/ml/datasets.html, specifically breastcancer, ionosphere, german(numeric) and diabetes. We also experimented with two optical character recognition datasets used by Reyzin & Schapire (2006), ocr17 and ocr49, which contain the handwritten digits 1 and 7, and 4 and 9 (respectively). Finally, because these OCR datasets are fairly small, we also constructed the analogous datasets from all of MNIST, http://yann.lecun.com/exdb/mnist/, which we call ocr17-mnist and ocr49-mnist." A sketch of the digit-pair construction is given below.
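As an illustration of the MNIST-based construction quoted above, the sketch below restricts MNIST to a digit pair and maps it to a binary task, e.g. (1, 7) for ocr17-mnist and (4, 9) for ocr49-mnist. The `fetch_openml` loader is a convenience assumption; the paper does not say how MNIST was loaded.

```python
import numpy as np
from sklearn.datasets import fetch_openml  # assumed loader; any MNIST source works

def mnist_digit_pair(a, b):
    """Binary dataset from the MNIST digits a and b, labels in {-1, +1}."""
    X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
    y = y.astype(int)
    mask = (y == a) | (y == b)
    return X[mask], np.where(y[mask] == a, -1, 1)

X17, y17 = mnist_digit_pair(1, 7)  # ocr17-mnist analogue
X49, y49 = mnist_digit_pair(4, 9)  # ocr49-mnist analogue
```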
Dataset Splits: Yes. "Each dataset was randomly partitioned into 10 folds, and each algorithm was run 10 times, with a different assignment of folds to the training set, validation set and test set for each run. Specifically, for each run i ∈ {0, ..., 9}, fold i was used for testing, fold i + 1 (mod 10) was used for validation, and the remaining folds were used for training." The sketch after this entry spells out the rotation.
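The quoted rotation is mechanical, so it can be pinned down in a short sketch. One assumption: the quote does not say whether the random partition was drawn once or per run; the sketch fixes it once.

```python
import numpy as np

def fold_split(n_examples, run, n_folds=10, seed=0):
    """Train/validation/test indices for a given run: fold `run` is the
    test set, fold (run + 1) mod n_folds the validation set, and the
    remaining folds the training set."""
    rng = np.random.RandomState(seed)  # one fixed random partition
    folds = np.array_split(rng.permutation(n_examples), n_folds)
    test = folds[run]
    val = folds[(run + 1) % n_folds]
    train = np.concatenate([f for k, f in enumerate(folds)
                            if k not in (run, (run + 1) % n_folds)])
    return train, val, test

# Ten runs, each with a different assignment of folds.
splits = [fold_split(n_examples=1000, run=i) for i in range(10)]
```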
Hardware Specification: No. No specific hardware details (such as GPU/CPU models, memory, or cloud instance types) are provided.
Software Dependencies: No. No specific software dependencies with version numbers are mentioned (e.g., Python 3.8, PyTorch 1.9).
Experiment Setup: Yes. "In all of our experiments, the number of iterations was set to 100." "For AdaBoost-L1, we optimized over β ∈ {2^-i : i = 6, ..., 0} and for DeepBoost, we optimized over β in the same range and λ ∈ {0.0001, 0.005, 0.01, 0.05, 0.1, 0.5}." "Specifically, for AdaBoost we optimized over K ∈ {1, ..., 6}, for AdaBoost-L1 we optimized over those same values for K and β ∈ {10^-i : i = 3, ..., 7}, and for DeepBoost we optimized over those same values for K, β and λ ∈ {10^-i : i = 3, ..., 7}." A sketch enumerating these grids is given below.
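For concreteness, the search spaces quoted above can be enumerated as below. The negative exponents are reconstructed from the garbled extraction ("2 i" read as 2^-i, "10 i" as 10^-i), and `grid` with its keyword names is a hypothetical stand-in for whatever validation loop the authors used.

```python
from itertools import product

T = 100                                            # boosting iterations, all runs
BETAS_STUMPS = [2.0 ** -i for i in range(7)]       # {2^-i : i = 6, ..., 0}
LAMBDAS_STUMPS = [0.0001, 0.005, 0.01, 0.05, 0.1, 0.5]
KS = range(1, 7)                                   # tree depths K in {1, ..., 6}
POWERS_OF_TEN = [10.0 ** -i for i in range(3, 8)]  # {10^-i : i = 3, ..., 7}

def grid(model):
    """Validation candidates for the decision-tree experiments."""
    if model == "adaboost":
        return [{"K": k, "T": T} for k in KS]
    if model == "adaboost_l1":
        return [{"K": k, "beta": b, "T": T}
                for k, b in product(KS, POWERS_OF_TEN)]
    if model == "deepboost":
        return [{"K": k, "beta": b, "lam": l, "T": T}
                for k, b, l in product(KS, POWERS_OF_TEN, POWERS_OF_TEN)]
    raise ValueError(model)
```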