A Bayesian Framework for Online Classifier Ensemble

Authors: Qinxun Bai, Henry Lam, Stan Sclaroff

ICML 2014

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In experiments with real-world datasets, our formulation often performs better than online boosting algorithms." (Abstract; see also Section 5: Experiments)
Researcher Affiliation | Academia | Qinxun Bai (QINXUN@CS.BU.EDU), Department of Computer Science, Boston University, Boston, MA 02215 USA; Henry Lam (KHLAM@BU.EDU), Department of Mathematics and Statistics, Boston University, Boston, MA 02215 USA; Stan Sclaroff (SCLAROFF@CS.BU.EDU), Department of Computer Science, Boston University, Boston, MA 02215 USA
Pseudocode | Yes | Algorithm 1 (Bayesian Ensemble) and Algorithm 2 (Closed-form Bayesian Ensemble)
Open Source Code | No | No explicit statement or link providing access to the paper's own source code is found.
Open Datasets | Yes | "We report two sets of experiments on binary classification benchmark datasets." (Footnote: http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/)
Dataset Splits | Yes | "Each data set is split into training and testing sets for each random trial, where a training set contains no more than 10% of the total amount of data."
Hardware Specification | No | No specific hardware details (e.g., CPU/GPU models, memory, or processing units) used for running the experiments are mentioned in the paper.
Software Dependencies | No | No specific software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions) are mentioned.
Experiment Setup | Yes | "In all experiments, we have set the hyperparameters of our method α = β = 1 and θ = 0.1."
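The dataset-split protocol quoted in the table above (per-trial random splits with a training set of at most 10% of the data) can be sketched as follows. This is a minimal illustration, not the paper's code: the function name `split_trial`, the toy dataset, and the seed handling are assumptions; only the 10% cap and the random-trial splitting come from the quoted text.

```python
import random

def split_trial(data, train_frac=0.1, seed=0):
    """Randomly split a dataset for one trial, with the training set
    holding no more than 10% of the samples (per the quoted protocol)."""
    rng = random.Random(seed)  # fixed seed per trial for repeatability
    idx = list(range(len(data)))
    rng.shuffle(idx)
    n_train = int(len(data) * train_frac)  # at most 10% used for training
    train = [data[i] for i in idx[:n_train]]
    test = [data[i] for i in idx[n_train:]]
    return train, test

# Illustrative usage with a toy dataset of 100 labeled points.
toy = [(x, x % 2) for x in range(100)]
train, test = split_trial(toy)
print(len(train), len(test))  # 10 90
```

Running several trials with different seeds reproduces the "each random trial" aspect of the quoted setup.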