Learning to Optimize via Information-Directed Sampling

Authors: Daniel Russo, Benjamin Van Roy

NeurIPS 2014 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility assessment: for each variable below, the result and the supporting LLM response.

Research Type: Experimental
We benchmark the performance of IDS through simulations of the widely studied Bernoulli and linear bandit problems, for which UCB algorithms and Thompson sampling are known to be very effective. We find that even in these settings, IDS outperforms UCB algorithms, Thompson sampling, and knowledge gradient.
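
The baselines named here are standard; as a point of reference, the following is a minimal sketch of one of them, Thompson sampling for a Bernoulli bandit with Beta(1, 1) priors. It is not the authors' implementation; the function name, the `pull(a)` reward callback, and the use of NumPy are illustrative assumptions.

```python
import numpy as np

def thompson_sampling_bernoulli(pull, n_arms=10, horizon=1000, rng=None):
    """Thompson sampling for a Bernoulli bandit with independent Beta(1, 1) priors.

    `pull(a)` is assumed to return a 0/1 reward for arm `a`.
    """
    rng = rng or np.random.default_rng()
    alpha = np.ones(n_arms)  # Beta posterior parameters: alpha counts successes + 1
    beta = np.ones(n_arms)   # beta counts failures + 1
    rewards = []
    for _ in range(horizon):
        samples = rng.beta(alpha, beta)  # one posterior sample per arm
        a = int(np.argmax(samples))      # play the arm whose sample is largest
        r = pull(a)
        alpha[a] += r
        beta[a] += 1 - r
        rewards.append(r)
    return np.array(rewards)
```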

Researcher Affiliation: Academia
Daniel Russo, Stanford University, Stanford, CA 94305, djrusso@stanford.edu; Benjamin Van Roy, Stanford University, Stanford, CA 94305, bvr@stanford.edu

Pseudocode: Yes
Algorithm 1, presented in the supplementary material, solves (6) by looping over all pairs of actions and solving a one-dimensional convex optimization problem.
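
To make the quoted description concrete, here is a minimal sketch of that pairwise minimization. It assumes the per-action expected regrets `delta` and information gains `gain` have already been computed from the posterior (both names are illustrative), and it replaces the exact one-dimensional convex solve of Algorithm 1 with a dense grid over the mixing weight q; it is a sketch, not the authors' code.

```python
import numpy as np

def ids_pairwise(delta, gain, n_grid=1001):
    """Approximately minimize the information ratio delta(pi)^2 / gain(pi) over
    distributions pi supported on at most two actions.

    `delta[a]` is the expected regret of action a and `gain[a]` its information
    gain; at least one action is assumed to have strictly positive gain.
    """
    delta = np.asarray(delta, dtype=float)
    gain = np.asarray(gain, dtype=float)
    k = len(delta)
    q = np.linspace(0.0, 1.0, n_grid)  # weight placed on the first action of the pair

    best_ratio, best_pair = np.inf, None
    for i in range(k):
        for j in range(k):  # loop over all pairs of actions, as in the quoted description
            d = q * delta[i] + (1 - q) * delta[j]
            g = q * gain[i] + (1 - q) * gain[j]
            with np.errstate(divide="ignore", invalid="ignore"):
                ratio = np.where(g > 0, d ** 2 / g, np.inf)
            m = int(np.argmin(ratio))
            if ratio[m] < best_ratio:
                best_ratio, best_pair = ratio[m], (i, j, q[m])

    pi = np.zeros(k)
    i, j, w = best_pair
    pi[i] += w
    pi[j] += 1 - w
    return pi, best_ratio  # sampling distribution and its (approximate) information ratio
```

At each round, sampling an action from the returned distribution gives the IDS action selection.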

Open Source Code: No
The paper does not provide an explicit statement about the release of source code or a link to a code repository for the described methodology.

Open Datasets: No
The paper describes how the data for the experiments (Beta-Bernoulli rewards and linear bandit problem parameters) are generated, but it does not use or provide access to a pre-existing public dataset.

Dataset Splits: No
The paper describes simulation experiments with a specified number of trials and time horizons, but it does not specify explicit dataset splits (e.g., train/validation/test percentages or counts) in the traditional machine learning sense.

Hardware Specification: No
The paper does not provide any specific details about the hardware used to run the experiments.

Software Dependencies: No
The paper does not mention any specific software dependencies or their version numbers.

Experiment Setup: Yes
Beta-Bernoulli experiment: '10 arms and a time horizon of 1000.' 'The θ_i are drawn independently from Beta(1, 1), which is the uniform distribution.' Asymptotic performance: 'problem with three actions and with θ = (.3, .2, .1).' Linear bandit problems: 'Each action a ∈ R^5 is defined by a 5 dimensional feature vector.' 'θ ∼ N(0, 10I) is drawn from a multivariate Gaussian prior distribution, and ε_t ∼ N(0, 1) is independent Gaussian noise.' 'action set A contains 30 actions, each with features drawn uniformly at random from [−1/√5, 1/√5].'
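
As a complement to these quotes, here is a sketch of how the two simulated problems could be instantiated with NumPy. The parameter values follow the quoted setup; the function names, the fixed seed, and the reward closures are assumptions, and the linear-bandit horizon is not specified in the quoted text, so none is set here.

```python
import numpy as np

rng = np.random.default_rng(0)  # arbitrary seed, only to make the sketch reproducible

def make_beta_bernoulli(n_arms=10, horizon=1000):
    """Beta-Bernoulli problem: 10 arms, horizon 1000, theta_i ~ Beta(1, 1) = Uniform(0, 1)."""
    theta = rng.beta(1.0, 1.0, size=n_arms)
    pull = lambda a: int(rng.random() < theta[a])  # Bernoulli(theta_a) reward for arm a
    return theta, pull, horizon

def make_linear_bandit(n_actions=30, dim=5):
    """Linear bandit: 30 actions in R^5, theta ~ N(0, 10 I), additive N(0, 1) noise."""
    theta = rng.normal(0.0, np.sqrt(10.0), size=dim)  # draw from the N(0, 10 I) prior
    actions = rng.uniform(-1.0 / np.sqrt(dim), 1.0 / np.sqrt(dim), size=(n_actions, dim))
    pull = lambda a: float(actions[a] @ theta) + rng.normal(0.0, 1.0)  # noisy linear reward
    return theta, actions, pull
```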