Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

On the optimality of the Hedge algorithm in the stochastic regime

Authors: Jaouad Mourtada, Stéphane Gaïffas

JMLR 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this section, we illustrate our theoretical results by numerical experiments that compare the behavior of various Hedge algorithms in the stochastic regime. We report in Figure 1 the cumulative regrets of the considered algorithms in four examples."
Researcher Affiliation | Academia | Jaouad Mourtada, Centre de Mathématiques Appliquées, École Polytechnique, Palaiseau, France; Stéphane Gaïffas, Laboratoire de Probabilités, Statistique et Modélisation, Université Paris Diderot, Paris, France
Pseudocode | No | The paper describes algorithms such as Hedge, Decreasing Hedge, Constant Hedge, and Hedge with the doubling trick using mathematical formulations and prose, but it does not contain a dedicated pseudocode block or algorithm listing.
Open Source Code | No | The paper does not provide any explicit statement about releasing source code, nor does it include links to a code repository in the main text or supplementary materials. While it references "Koolen (2018)", which includes a blog link, this refers to another author's work and not the current paper's implementation.
Open Datasets | No | The paper describes generating synthetic data for its experiments: "(a) Stochastic instance with a gap. This is the standard instance considered in this paper. The losses are drawn independently from Bernoulli distributions (one of parameter 0.3, 2 of parameter 0.4 and 7 of parameter 0.5, so that M = 10 and ∆ = 0.1)." It does not refer to any pre-existing public datasets with access information.
Dataset Splits | No | The paper describes generating synthetic stochastic instances with specific parameters for its numerical experiments (e.g., "losses are drawn independently from Bernoulli distributions"). It does not involve partitioning a pre-existing dataset into training, validation, or test splits, as is common in empirical studies using fixed datasets.
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory) used to run the numerical experiments or simulations.
Software Dependencies | No | The paper describes the algorithms and their parameters for the numerical illustrations but does not specify any software dependencies or versions (e.g., programming languages, libraries, frameworks) used for implementation.
Experiment Setup | Yes | "We consider the following algorithms: hedge is Decreasing Hedge with the default learning rates η_t = 2√(log(M)/t); hedge constant is Constant Hedge with constant learning rate η_t = √(8 log(M)/T); hedge doubling is Hedge with the doubling trick with c0 = 8; adahedge is the AdaHedge algorithm from de Rooij et al. (2014), a variant of the Hedge algorithm with a data-dependent tuning of the learning rate η_t (based on ℓ_1, …, ℓ_{t−1})."
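To make the experiment setup concrete, here is a minimal NumPy sketch of Decreasing Hedge with the rate η_t = 2√(log(M)/t), run on the paper's "stochastic instance with a gap" (one Bernoulli(0.3) expert, two Bernoulli(0.4), seven Bernoulli(0.5)). The horizon T, the random seed, and all variable names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stochastic instance with a gap (as described in the paper):
# M = 10 Bernoulli experts, parameters 0.3, 0.4 (x2), 0.5 (x7), gap 0.1.
params = np.array([0.3, 0.4, 0.4] + [0.5] * 7)
M = len(params)
T = 1000  # illustrative horizon, not from the paper

weights = np.ones(M) / M      # uniform initial weights
cum_loss = np.zeros(M)        # cumulative loss of each expert
learner_loss = 0.0            # cumulative loss of the Hedge learner

for t in range(1, T + 1):
    losses = rng.binomial(1, params).astype(float)  # i.i.d. Bernoulli losses
    learner_loss += weights @ losses                # expected loss of the mixture
    cum_loss += losses
    eta = 2.0 * np.sqrt(np.log(M) / t)              # Decreasing Hedge rate
    logits = -eta * cum_loss
    weights = np.exp(logits - logits.max())         # numerically stable softmax
    weights /= weights.sum()

# Regret against the best expert in hindsight.
regret = learner_loss - cum_loss.min()
print(f"cumulative regret after T={T} rounds: {regret:.2f}")
```

With the gap ∆ = 0.1, the weight should concentrate on the Bernoulli(0.3) expert and the regret should stay bounded, consistent with the constant-regret behavior the paper establishes for Decreasing Hedge in the stochastic regime.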