Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Boosting Algorithms for Detector Cascade Learning

Authors: Mohammad Saberian, Nuno Vasconcelos

JMLR 2014 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on face and pedestrian detection show that the resulting cascades outperform current state-of-the-art methods in both detection accuracy and speed. In this section, we report on several experiments conducted to evaluate FCBoost.
Researcher Affiliation | Academia | Mohammad Saberian (EMAIL), Nuno Vasconcelos (EMAIL), Statistical Visual Computing Laboratory, University of California, San Diego, La Jolla, CA 92039, USA
Pseudocode | Yes | Algorithm 1 (AdaBoost), Algorithm 2 (last-stage cascade), Algorithm 3 (multiplicative cascade), Algorithm 4 (best stage update), Algorithm 5 (FCBoost)
Open Source Code | No | The paper does not provide an explicit statement about releasing open-source code or a link to a code repository. It describes the algorithms but offers no direct access to an implementation.
Open Datasets | Yes | Pedestrian detection relied on a training set of 2,347 positive and 2,000 negative examples, of size 72x30, from the Caltech Pedestrian data set (Dollár et al., 2012). All detectors were evaluated on this data set [MIT-CMU test set].
Dataset Splits | Yes | The training set for face detection contained 4,500 faces (along with their flipped replicas) and 9,000 negative examples... The test set consisted of 832 faces (along with their flipped replicas) and 1,664 negatives. ...pedestrian detection relied on a training set of 2,347 positive and 2,000 negative examples... from the Caltech Pedestrian data set (Dollár et al., 2012).
Hardware Specification | No | The paper mentions 'low-complexity processors, such as digital cameras or cell phones' as application targets and reports processing times as a metric, but it does not specify the hardware (e.g., CPU/GPU models, processor types, or memory) used to run its experiments.
Software Dependencies | No | The paper does not name ancillary software with version numbers (e.g., libraries or solvers) needed to replicate the experiments. It mentions techniques such as Haar wavelets and decision stumps, but no software implementations or versions.
Experiment Setup | Yes | In our implementation we always use µ = 5. All detectors were trained for 50 iterations. The unit computational cost was set to the cost of evaluating a new Haar feature. This resulted in a cost of 1/5 units for feature recycling, i.e., λ = 1/5 in (47). ...In all cases, the target detection rate was set to DT = 95%. ...For FCBoost we used a last-stage cascade... We did not attempt to optimize η, simply using η = 0.02. The cost factor C was initialized with C = 0.99.
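To make the quoted setup concrete, the sketch below records the hyperparameters reported in the Experiment Setup row as a plain dictionary, and illustrates the generic boosting loop corresponding to the paper's Algorithm 1 (AdaBoost with decision stumps). This is a minimal textbook sketch, not the authors' FCBoost implementation: the function names, stump learner, and synthetic data are all illustrative assumptions.

```python
import numpy as np

# Hyperparameters quoted from the paper's experiment setup; FCBoost itself
# (Algorithm 5) is more involved and is not reproduced here.
FCBOOST_SETUP = {
    "mu": 5,            # fixed in the authors' implementation
    "iterations": 50,   # all detectors trained for 50 iterations
    "lambda": 1 / 5,    # cost of feature recycling, in Haar-feature units
    "D_T": 0.95,        # target detection rate
    "eta": 0.02,        # not optimized by the authors
    "C_init": 0.99,     # initial value of the cost factor C
}

def train_stump(X, y, w):
    """Exhaustively pick the (feature, threshold, polarity) stump with
    lowest weighted classification error."""
    best = (0, 0.0, 1, np.inf)
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
                err = np.sum(w[pred != y])
                if err < best[3]:
                    best = (j, thr, pol, err)
    return best

def adaboost(X, y, rounds):
    """Generic AdaBoost loop over decision stumps (labels in {-1, +1})."""
    w = np.full(len(y), 1 / len(y))       # uniform initial weights
    ensemble = []
    for _ in range(rounds):
        j, thr, pol, err = train_stump(X, y, w)
        err = max(err, 1e-10)             # avoid division by zero
        alpha = 0.5 * np.log((1 - err) / err)
        pred = np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
        w *= np.exp(-alpha * y * pred)    # upweight misclassified examples
        w /= w.sum()
        ensemble.append((alpha, j, thr, pol))
    return ensemble

def predict(ensemble, X):
    """Sign of the weighted vote of all stumps in the ensemble."""
    score = sum(a * np.where(p * (X[:, j] - t) >= 0, 1, -1)
                for a, j, t, p in ensemble)
    return np.sign(score)
```

For example, `adaboost(X, y, 3)` on a linearly separable 1-D toy set recovers a perfect classifier in the first round; a real detector cascade would instead train on Haar-feature responses and chain many such stages, rejecting negatives early.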