Boosting with Abstention

Authors: Corinna Cortes, Giulia DeSalvo, Mehryar Mohri

NeurIPS 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We also report the results of several experiments suggesting that our algorithm provides a significant improvement in practice over two confidence-based algorithms.
Researcher Affiliation | Collaboration | Corinna Cortes, Google Research, New York, NY 10011, corinna@google.com; Giulia De Salvo, Courant Institute, New York, NY 10012, desalvo@cims.nyu.edu; Mehryar Mohri, Courant Institute and Google, New York, NY 10012, mohri@cims.nyu.edu
Pseudocode | Yes | Figure 3: Pseudocode of the BA algorithm for both the exponential loss with Φ1(u) = Φ2(u) = exp(u) as well as for the logistic loss with Φ1(u) = Φ2(u) = log2(1 + e^u). A short code sketch of these two surrogates follows the table.
Open Source Code | No | The paper does not provide concrete access to source code for its methodology.
Open Datasets | Yes | We tested the algorithms on six data sets from UCI's data repository, specifically australian, cod, skin, banknote, haberman, and pima. For more information about the data sets, see Appendix I. For each data set, we implemented the standard 5-fold cross-validation where we randomly divided the data into training, validation and test set with the ratio 3:1:1. [...] We have also successfully run BA on the CIFAR-10 data set (boat and horse images) which contains 10,000 instances and we believe that our algorithm can scale to much larger datasets.
Dataset Splits | Yes | For each data set, we implemented the standard 5-fold cross-validation where we randomly divided the data into training, validation and test set with the ratio 3:1:1. A sketch of this split appears after the table.
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies | No | The paper mentions "implemented in CVX [8]" and cites "Scikit-learn" but does not provide specific version numbers for these or other ancillary software components.
Experiment Setup | Yes | For all three algorithms, the cost values ranged over c ∈ {0.05, 0.1, . . . , 0.5} while the threshold γ ranged over γ ∈ {0.08, 0.16, . . . , 0.96}. For the BA algorithm, the β regularization parameter ranged over β ∈ {0, 0.05, . . . , 0.95}. All experiments for BA were based on T = 200 boosting rounds. These grids are materialized in a sketch after the table.
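
The pseudocode row quotes the two surrogate functions used to instantiate the BA algorithm. As a minimal sketch, the quoted pointwise definitions can be written directly in Python; the function names are ours, and the composite abstention loss that combines Φ1 and Φ2 is not reproduced here since the quote only gives the pointwise forms.

```python
import numpy as np

# The two surrogates quoted from Figure 3 of the paper:
# Phi1(u) = Phi2(u) = exp(u) for the exponential loss, and
# Phi1(u) = Phi2(u) = log2(1 + e^u) for the logistic loss.
# Function names are ours, chosen for this sketch.

def exp_surrogate(u: np.ndarray) -> np.ndarray:
    """Exponential-loss instantiation: Phi(u) = exp(u)."""
    return np.exp(u)

def logistic_surrogate(u: np.ndarray) -> np.ndarray:
    """Logistic-loss instantiation: Phi(u) = log2(1 + e^u)."""
    # np.logaddexp(0, u) computes log(1 + e^u) stably; dividing by
    # log(2) converts the natural log to base 2.
    return np.logaddexp(0.0, u) / np.log(2.0)
```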
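The dataset-splits row describes 5-fold cross-validation with a random 3:1:1 train/validation/test division (i.e., 60%/20%/20%). The paper does not give the splitting code, so the following is a minimal sketch of one plausible implementation using scikit-learn's `train_test_split`; the function name and seeding scheme are assumptions.

```python
from sklearn.model_selection import train_test_split

def split_3_1_1(X, y, seed):
    """Randomly split (X, y) into train/validation/test with ratio 3:1:1.

    A sketch of the division described in the paper; the paper does not
    specify how the split was actually implemented.
    """
    # First peel off the 20% test portion (1 part out of 5).
    X_rest, X_test, y_rest, y_test = train_test_split(
        X, y, test_size=0.2, random_state=seed)
    # Of the remaining 80%, take a quarter (20% overall) for validation.
    X_train, X_val, y_train, y_val = train_test_split(
        X_rest, y_rest, test_size=0.25, random_state=seed)
    return (X_train, y_train), (X_val, y_val), (X_test, y_test)

# One split per fold of the 5-fold protocol, each with a fresh shuffle:
# folds = [split_3_1_1(X, y, seed=k) for k in range(5)]
```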
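The experiment-setup row gives the hyperparameter grids explicitly. This sketch materializes those grids with the values taken verbatim from the quote; the nested model-selection loop is an assumption, since the quoted text does not describe the search procedure.

```python
import numpy as np

# Hyperparameter grids quoted in the experiment-setup row. Building them
# from integer ranges (then rounding) avoids floating-point drift.
costs = np.round(np.arange(1, 11) * 0.05, 2)   # c in {0.05, 0.10, ..., 0.50}
gammas = np.round(np.arange(1, 13) * 0.08, 2)  # gamma in {0.08, 0.16, ..., 0.96}
betas = np.round(np.arange(0, 20) * 0.05, 2)   # beta in {0.00, 0.05, ..., 0.95}
T = 200                                        # boosting rounds for BA

# A validation-set sweep over the grids (structure assumed, not quoted):
for c in costs:
    for gamma in gammas:
        for beta in betas:
            pass  # placeholder: train BA for T rounds with (c, gamma, beta)
```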