Multi-Class Deep Boosting
Authors: Vitaly Kuznetsov, Mehryar Mohri, Umar Syed
NeurIPS 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In Section 4, we report the results of experiments demonstrating that multi-class DeepBoost outperforms AdaBoost.MR and multinomial (additive) logistic regression, as well as their L1-norm regularized variants, on several datasets. |
| Researcher Affiliation | Collaboration | Vitaly Kuznetsov, Courant Institute, 251 Mercer Street, New York, NY 10012, vitaly@cims.nyu.edu; Mehryar Mohri, Courant Institute & Google Research, 251 Mercer Street, New York, NY 10012, mohri@cims.nyu.edu; Umar Syed, Google Research, 76 Ninth Avenue, New York, NY 10011, usyed@google.com |
| Pseudocode | Yes | Figure 1: Pseudocode of the MDeepBoostSum algorithm for both the exponential loss and the logistic loss. |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | In our experiments, we used 8 UCI datasets: abalone, handwritten, letters, pageblocks, pendigits, satimage, statlog and yeast (see more details on these datasets in Table 4, Appendix L). |
| Dataset Splits | Yes | To set these parameters, we used the following parameter optimization procedure: we randomly partitioned each dataset into 4 folds and, for each tuple (λ, β, K) in the set of possible parameters (described below), we ran MDeepBoostSum, with a different assignment of folds to the training set, validation set and test set for each run. Specifically, for each run i ∈ {0, 1, 2, 3}, fold i was used for testing, fold i + 1 (mod 4) was used for validation, and the remaining folds were used for training. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies, libraries, or programming languages used in the experiments. |
| Experiment Setup | Yes | For each dataset, the set of possible values for λ and β was initialized to {10⁻⁵, 10⁻⁶, . . . , 10⁻¹⁰}, and to {1, 2, 3, 4, 5} for the maximum tree depth K. |
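The fold-rotation and parameter-grid procedure quoted above can be sketched as follows. This is a minimal illustration, not the authors' code: `fold_assignment` is a hypothetical helper name, and the actual MDeepBoostSum training loop is omitted.

```python
from itertools import product

def fold_assignment(run_i, n_folds=4):
    """Fold rotation as described in the paper: for run i, fold i is the
    test set, fold (i+1) mod n_folds the validation set, and the
    remaining folds form the training set."""
    test = run_i % n_folds
    val = (run_i + 1) % n_folds
    train = [f for f in range(n_folds) if f not in (test, val)]
    return train, val, test

# Parameter grid as reported: λ, β ∈ {10⁻⁵, …, 10⁻¹⁰}, depth K ∈ {1, …, 5}.
lambdas = [10.0 ** -e for e in range(5, 11)]
betas = [10.0 ** -e for e in range(5, 11)]
depths = [1, 2, 3, 4, 5]
grid = list(product(lambdas, betas, depths))  # 6 * 6 * 5 = 180 tuples

for run_i in range(4):
    train, val, test = fold_assignment(run_i)
    # For each (lam, beta, K) in grid: train MDeepBoostSum on `train`,
    # select parameters on `val`, and report accuracy on `test`
    # (training itself is not shown here).
```

This reproduces only the split bookkeeping and the search grid; the choice of the winning (λ, β, K) tuple would be made on the validation fold of each run.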