ActiveHedge: Hedge meets Active Learning

Authors: Bhuvesh Kumar, Jacob D Abernethy, Venkatesh Saligrama

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We provide preliminary experiments to compare ActiveHedge (Algorithm 2) with standard Hedge (Algorithm 1) and the label-efficient algorithm of Cesa-Bianchi et al. (2005).
Researcher Affiliation | Academia | Georgia Institute of Technology; Department of Electrical and Computer Engineering, Boston University.
Pseudocode | Yes | Algorithm 1: Hedge
Open Source Code | No | The paper provides no explicit statement of, or link to, open-source code for the described methodology.
Open Datasets | No | The paper generates synthetic data for its experiments ('We uniformly sample N linear classifiers from a unit sphere centred at origin. We then sample M points from a unit sphere and classify each point using the N experts to create the expert prediction matrix X.') rather than using a publicly available dataset with concrete access information.
Dataset Splits | No | The paper does not specify training, validation, or test splits.
Hardware Specification | No | The paper does not specify the hardware used to run its experiments.
Software Dependencies | No | The paper does not list ancillary software dependencies with version numbers.
Experiment Setup | Yes | All experiments are repeated 100 times, with M = 10000, N = 100, and d = 10. We use upper bounds for ζ and ϵ; other parameters are set optimally.
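
The synthetic setup quoted in the table (sample N linear classifiers and M points from a unit sphere, form the expert prediction matrix X) together with the standard Hedge baseline (Algorithm 1) can be sketched as below. This is a minimal illustration, not the authors' code: the ground-truth labeling (here, one hidden expert taken as the target concept) and the learning-rate tuning are assumptions, and the problem sizes are scaled down from the paper's M = 10000, N = 100, d = 10.

```python
import numpy as np

rng = np.random.default_rng(0)

# Scaled-down problem sizes (the paper uses M = 10000, N = 100, d = 10).
M, N, d = 1000, 50, 10

def unit_sphere(n, d, rng):
    """Sample n points uniformly from the unit sphere in R^d."""
    v = rng.standard_normal((n, d))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

experts = unit_sphere(N, d, rng)  # N linear classifiers through the origin
points = unit_sphere(M, d, rng)   # M data points

# Expert prediction matrix X: entry (t, i) is expert i's +/-1 label for point t.
X = np.sign(points @ experts.T)

# Ground-truth labels: one hidden expert acts as the target concept
# (an illustrative assumption; the paper's labeling scheme may differ).
truth = X[:, 0]

# Standard Hedge (Algorithm 1): exponential weights over the N experts.
eta = np.sqrt(2 * np.log(N) / M)  # a standard learning-rate tuning
w = np.ones(N)
mistakes = 0
for t in range(M):
    p = w / w.sum()
    score = p @ X[t]                       # weighted vote of the experts
    pred = 1.0 if score >= 0 else -1.0
    mistakes += int(pred != truth[t])
    losses = (X[t] != truth[t]).astype(float)
    w *= np.exp(-eta * losses)             # multiplicative-weights update

print(mistakes)  # with a perfect expert present, mistakes stay far below M
```

Since one expert never errs, its weight dominates over time and the weighted-majority prediction makes only O(log N / η) mistakes; an active variant would additionally decide, per round, whether to query the label at all.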