A Drifting-Games Analysis for Online Learning and Applications to Boosting

Authors: Haipeng Luo, Robert E. Schapire

NeurIPS 2014

Reproducibility Assessment

Research Type: Experimental
LLM response: "Finally, we translate our new Hedge algorithm into a new adaptive boosting algorithm that is computationally faster as shown in experiments, since it ignores a large number of examples on each round."

Researcher Affiliation: Collaboration
LLM response: Haipeng Luo, Department of Computer Science, Princeton University, Princeton, NJ 08540 (haipengl@cs.princeton.edu); Robert E. Schapire, Department of Computer Science, Princeton University, Princeton, NJ 08540 (schapire@cs.princeton.edu). R. Schapire is currently at Microsoft Research in New York City.

Pseudocode: Yes
LLM response: Algorithm 1, the conversion of a Hedge algorithm H to a DGv1 algorithm D_R (a Python sketch of this conversion follows the assessment below):

    Input: a Hedge algorithm H
    for t = 1 to T do
        Query H: p_t = H(ℓ_{1:t-1}).
        Set: D_R(z_{1:t-1}) = p_t.
        Receive movements z_t from the adversary.
        Set: ℓ_{t,i} = z_{t,i} - min_j z_{t,j}, ∀i.

Open Source Code: No
LLM response: The paper neither includes an explicit statement about releasing its source code nor provides a link to a code repository for the described methodology.

Open Datasets: No
LLM response: The paper mentions experiments using 'Real.All' but does not provide concrete access information (link, DOI, repository name, or formal citation with authors and year) for a publicly available or open dataset.

Dataset Splits: No
LLM response: The paper discusses training and test error but does not provide specific split information (exact percentages, sample counts, or a detailed splitting methodology) for training, validation, and test sets.

Hardware Specification: No
LLM response: The paper reports experiments but provides no specific hardware details (GPU model, CPU type, or memory specification) used to run them.

Software Dependencies: No
LLM response: The paper does not specify the ancillary software (e.g., library names with version numbers) needed to replicate the experiments.

Experiment Setup: No
LLM response: The paper runs experiments comparing AdaBoost, NH-Boost, and NH-Boost.DT, but does not provide specific details (hyperparameter values, optimization settings, or other training configurations) needed to reproduce the experimental setup.
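
The conversion in Algorithm 1 is mechanical: the DGv1 algorithm plays whatever distribution the Hedge algorithm would play, then feeds Hedge the movements shifted so that the best action suffers zero loss. The following is a minimal Python sketch of that wrapper, assuming a standard exponential-weights Hedge as the base algorithm; the names Hedge and dgv1_round and the fixed learning rate eta are illustrative choices, not identifiers from the paper, which leaves the underlying Hedge algorithm abstract.

    import numpy as np

    class Hedge:
        """Assumed baseline: exponential-weights Hedge over n actions."""
        def __init__(self, n, eta):
            self.eta = eta                # learning rate (assumed fixed)
            self.cum_loss = np.zeros(n)   # cumulative losses ell_{1:t-1}

        def predict(self):
            # p_t proportional to exp(-eta * cumulative loss); subtract the
            # minimum before exponentiating for numerical stability.
            w = np.exp(-self.eta * (self.cum_loss - self.cum_loss.min()))
            return w / w.sum()

        def update(self, loss):
            self.cum_loss += loss

    def dgv1_round(hedge, z_t):
        """One round of the converted DGv1 algorithm D_R (Algorithm 1).

        In the actual game the adversary chooses z_t after seeing p_t;
        here z_t is passed in as an argument for simplicity.
        """
        p_t = hedge.predict()      # p_t = H(ell_{1:t-1}); D_R plays p_t
        loss = z_t - z_t.min()     # ell_{t,i} = z_{t,i} - min_j z_{t,j}
        hedge.update(loss)
        return p_t

    # Example usage: 3 actions, 5 rounds of random movements in [0, 1].
    rng = np.random.default_rng(0)
    h = Hedge(n=3, eta=0.5)
    for _ in range(5):
        p = dgv1_round(h, rng.random(3))

Note that the wrapper never inspects Hedge's internals: any algorithm exposing the same predict/update interface could be converted the same way, which is the point of the reduction.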