Precision-based Boosting

Authors: Mohammad Hossein Nikravan, Marjan Movahedan, Sandra Zilles (pp. 9153–9160)

AAAI 2021

Reproducibility variables, with each result and the supporting LLM response:
Research Type: Experimental. "An empirical study on AdaBoost and one of its multi-class versions, SAMME, demonstrates the superiority of our method on datasets with more than 1,000 instances as well as on datasets with more than three classes."
Researcher Affiliation: Academia. "Mohammad Hossein Nikravan, Marjan Movahedan, Sandra Zilles, Department of Computer Science, University of Regina, Regina, SK, Canada; nikravam@uregina.ca, marjan.movahedan@gmail.com, zilles@cs.uregina.ca"
Pseudocode: Yes. "Algorithm 1: AdaBoost Scheme (Freund and Schapire 1997)"; "Algorithm 2: PrAdaBoost". A sketch of the generic AdaBoost scheme is given below, after this list.
Open Source Code: No. The paper does not provide access to source code for the methodology it describes.
Open Datasets: Yes. "We evaluated (Pr)AdaBoost on 23 binary UCI datasets (Lichman 2013) and (Pr)SAMME on 18 multi-class UCI datasets". A dataset-loading sketch is given below, after this list.
Dataset Splits: Yes. "We performed 10-fold cross validation on each dataset (except isolet, which has designated training and test portions), comparing two algorithms always on the same folds." A same-folds evaluation sketch is given below, after this list.
Hardware Specification: No. The paper does not report the hardware (CPU/GPU models, processor speeds, or memory) used to run its experiments.
Software Dependencies: No. The paper mentions "decision stumps trained in Matlab as base classifiers" but gives no version numbers for Matlab or any other software dependency.
Experiment Setup: Yes. "We ran PrAdaBoost and AdaBoost for T iterations, trying T = 30, 50, and 100 (without attempting to tune T) ... using decision stumps trained in Matlab as base classifiers." A setup sketch is given below, after this list.
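
The AdaBoost scheme itself (Algorithm 1) is standard, so a minimal NumPy sketch of it with decision stump base classifiers is included here for orientation. This is the classic Freund and Schapire (1997) procedure, not the paper's PrAdaBoost; the precision-based reweighting of Algorithm 2 is defined only in the paper, and the function names here (fit_stump, stump_predict, adaboost) are illustrative, not the authors' code.

```python
import numpy as np

def fit_stump(X, y, w):
    """Weighted decision stump: best (feature, threshold, polarity) by weighted error."""
    n, d = X.shape
    best = (0, 0.0, 1, np.inf)  # feature index, threshold, polarity, weighted error
    for j in range(d):
        for thr in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, j] - thr) > 0, 1, -1)
                err = np.sum(w[pred != y])
                if err < best[3]:
                    best = (j, thr, pol, err)
    return best

def stump_predict(stump, X):
    j, thr, pol, _ = stump
    return np.where(pol * (X[:, j] - thr) > 0, 1, -1)

def adaboost(X, y, T=50):
    """Classic binary AdaBoost; y must be in {-1, +1}. Returns (stumps, alphas)."""
    n = len(y)
    w = np.full(n, 1.0 / n)                      # uniform initial weights
    stumps, alphas = [], []
    for _ in range(T):
        stump = fit_stump(X, y, w)
        pred = stump_predict(stump, X)
        err = max(np.sum(w[pred != y]), 1e-12)   # clip to avoid division by zero
        alpha = 0.5 * np.log((1 - err) / err)    # classic AdaBoost classifier weight
        w *= np.exp(-alpha * y * pred)           # up-weight misclassified examples
        w /= w.sum()
        stumps.append(stump)
        alphas.append(alpha)
    return stumps, alphas

def predict(stumps, alphas, X):
    agg = sum(a * stump_predict(s, X) for s, a in zip(stumps, alphas))
    return np.sign(agg)
```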
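
The UCI datasets are publicly available; as one way of obtaining them programmatically, many are mirrored on OpenML and loadable via scikit-learn. This is an assumption of the sketch (the paper itself worked in Matlab); isolet is one of the datasets the paper actually uses.

```python
from sklearn.datasets import fetch_openml

# 'isolet' is one UCI dataset the paper evaluates on; OpenML mirrors it.
X, y = fetch_openml("isolet", version=1, as_frame=False, return_X_y=True)
print(X.shape, len(set(y)))  # feature matrix shape and number of classes
```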
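
To mirror the "same folds" protocol, one can fix the fold indices once and reuse them for both algorithms. A sketch under the assumption that scikit-learn stands in for the paper's Matlab pipeline; compare_on_same_folds is a hypothetical helper, not from the paper.

```python
import numpy as np
from sklearn.base import clone
from sklearn.model_selection import KFold

def compare_on_same_folds(model_a, model_b, X, y, n_splits=10, seed=0):
    """Evaluate two classifiers on identical cross-validation folds."""
    folds = list(KFold(n_splits=n_splits, shuffle=True, random_state=seed).split(X))
    acc_a, acc_b = [], []
    for train_idx, test_idx in folds:
        for model, accs in ((model_a, acc_a), (model_b, acc_b)):
            fitted = clone(model).fit(X[train_idx], y[train_idx])
            accs.append(fitted.score(X[test_idx], y[test_idx]))
    return np.mean(acc_a), np.mean(acc_b)
```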
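
Finally, a sketch of the reported setup (decision stump base classifiers, untuned T in {30, 50, 100}), again assuming scikit-learn's AdaBoostClassifier as a stand-in for the paper's Matlab implementation; load_breast_cancer merely stands in for one of the binary UCI datasets.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)   # stand-in binary dataset
stump = DecisionTreeClassifier(max_depth=1)  # decision stump base classifier

for T in (30, 50, 100):  # the paper's untuned iteration counts
    clf = AdaBoostClassifier(estimator=stump, n_estimators=T)  # base_estimator= in older scikit-learn
    scores = cross_val_score(clf, X, y, cv=10)  # 10-fold cross validation, as in the paper
    print(f"T={T}: mean accuracy {scores.mean():.3f}")
```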