Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
A Bayesian Framework for Online Classifier Ensemble
Authors: Qinxun Bai, Henry Lam, Stan Sclaroff
ICML 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In experiments with real-world datasets, our formulation often performs better than online boosting algorithms. (Abstract) and Section 5: Experiments |
| Researcher Affiliation | Academia | Qinxun Bai (Department of Computer Science, Boston University, Boston, MA 02215 USA); Henry Lam (Department of Mathematics and Statistics, Boston University, Boston, MA 02215 USA); Stan Sclaroff (Department of Computer Science, Boston University, Boston, MA 02215 USA) |
| Pseudocode | Yes | Algorithm 1 Bayesian Ensemble and Algorithm 2 Closed-form Bayesian Ensemble |
| Open Source Code | No | No explicit statement or link providing access to the paper's own source code is found. |
| Open Datasets | Yes | We report two sets of experiments on binary classification benchmark datasets¹. ... ¹http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/ |
| Dataset Splits | Yes | Each data set is split into training and testing sets for each random trial, where a training set contains no more than 10% of the total amount of data. |
| Hardware Specification | No | No specific hardware details (like CPU/GPU models, memory, or processing units) used for running experiments are mentioned in the paper. |
| Software Dependencies | No | No specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions) are mentioned. |
| Experiment Setup | Yes | In all experiments, we have set the hyperparameters of our method α = β = 1 and θ = 0.1. |
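The split protocol quoted in the Dataset Splits row could be sketched as follows. The paper only states that each random trial uses a training set of no more than 10% of the data; the function name, the fixed cap at exactly 10%, and the seeding scheme below are illustrative assumptions, not details from the paper.

```python
import random

def split_trial(data, max_train_frac=0.1, seed=0):
    """One random trial's train/test split, with the training set
    capped at max_train_frac of the total (per the reported protocol).
    Hypothetical helper; not the authors' code."""
    rng = random.Random(seed)          # one seed per random trial
    indices = list(range(len(data)))
    rng.shuffle(indices)
    n_train = int(len(data) * max_train_frac)  # cap: at most 10% for training
    train = [data[i] for i in indices[:n_train]]
    test = [data[i] for i in indices[n_train:]]
    return train, test

# Example: 1000 samples -> 100 for training, 900 held out for testing.
train, test = split_trial(list(range(1000)), max_train_frac=0.1, seed=42)
```

Repeating this over several seeds would reproduce the "random trial" structure the quote describes, with results averaged across trials.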