An urn model for majority voting in classification ensembles

Authors: Víctor Soto, Alberto Suárez, Gonzalo Martínez-Muñoz

NeurIPS 2016

Reproducibility assessment — each variable is listed with its result and the supporting LLM response:
Research Type — Experimental
"In this section we present the results of an extensive empirical evaluation of the dynamical ensemble pruning method described in the previous section. The experiments are performed in a series of benchmark classification problems from the UCI Repository [1] and synthetic data [4] using Random Forests [5]."
Researcher Affiliation — Academia
Victor Soto, Computer Science Department, Columbia University, New York, NY, USA (vsoto@cs.columbia.edu); Alberto Suárez and Gonzalo Martínez-Muñoz, Computer Science Department, Universidad Autónoma de Madrid, Madrid, Spain ({gonzalo.martinez,alberto.suarez}@uam.es)
Pseudocode — No
No structured pseudocode or algorithm blocks were found in the paper.
Open Source Code — Yes
The code is available at https://github.com/vsoto/majority-ibp-prior.
Open Datasets — Yes
"The experiments are performed in a series of benchmark classification problems from the UCI Repository [1] and synthetic data [4] using Random Forests [5]."
Dataset Splits — Yes
"for each problem, 100 partitions are created by 10 10-fold cross-validation for real datasets and by random sampling in the synthetic datasets."
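The partitioning scheme quoted above (ten repetitions of 10-fold cross-validation, giving 100 train/test partitions per problem) can be sketched as follows. This is an illustrative reconstruction using only the Python standard library; the function name, seeding, and fold-assignment details are assumptions, not taken from the paper.

```python
import random

def repeated_kfold_indices(n_samples, n_folds=10, n_repeats=10, seed=0):
    """Generate n_repeats x n_folds train/test index partitions.

    Sketch of the evaluation protocol: each repetition shuffles the
    sample indices and splits them into n_folds disjoint test folds,
    so n_repeats * n_folds partitions are produced in total.
    """
    rng = random.Random(seed)
    partitions = []
    for _ in range(n_repeats):
        indices = list(range(n_samples))
        rng.shuffle(indices)
        for f in range(n_folds):
            # Every n_folds-th shuffled index forms one test fold.
            test = indices[f::n_folds]
            test_set = set(test)
            train = [i for i in indices if i not in test_set]
            partitions.append((train, test))
    return partitions

parts = repeated_kfold_indices(150)  # e.g. an iris-sized dataset
print(len(parts))                    # → 100 partitions
```

Each of the 100 partitions uses the full dataset, with every sample appearing exactly once as a test point within each repetition.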
Hardware Specification — No
The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running the experiments.
Software Dependencies — No
The paper mentions using Random Forests but does not specify any software dependencies with version numbers (e.g., Python version or specific library versions).
Experiment Setup — Yes
"(i) a Random Forest ensemble of size T = 101 is built; [...] (iii) The SIBA algorithm [14] is applied to dynamically select the number of classifiers that are needed for each instance in the test set to achieve a level of confidence in the prediction above α = 0.99."
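The stopping rule described in the setup above — querying classifiers one by one until the confidence in the predicted class exceeds α = 0.99 — can be sketched for the two-class case. This is an illustrative reconstruction, not the SIBA algorithm from the paper: it assumes a uniform prior over the full-ensemble tally and a hypergeometric likelihood for the observed votes, then stops once the posterior probability of the current majority class reaches α.

```python
from math import comb

def majority_confidence(T, t, v):
    """P(full-ensemble majority = class A | v of t queried trees voted A).

    The final tally k of class-A votes among all T trees gets a uniform
    prior; the observed votes contribute a hypergeometric likelihood
    C(k, v) * C(T - k, t - v). Returns the posterior mass on k > T // 2.
    """
    weights = {k: comb(k, v) * comb(T - k, t - v)
               for k in range(v, T - (t - v) + 1)}
    total = sum(weights.values())
    majority = sum(w for k, w in weights.items() if k > T // 2)
    return majority / total

def votes_needed(vote_stream, T=101, alpha=0.99):
    """Query trees sequentially (1 = class A, 0 = class B); stop once the
    confidence in the current majority class reaches alpha (odd T, no ties)."""
    v = 0
    for t, vote in enumerate(vote_stream, start=1):
        v += vote
        p_a = majority_confidence(T, t, v)
        if max(p_a, 1.0 - p_a) >= alpha:
            return t
    return len(vote_stream)
```

Under these assumptions, a run of unanimous votes drives the confidence above 0.99 after only a handful of queries, while split votes keep it near 0.5, so "easy" instances are decided with far fewer than T = 101 classifiers.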