Learning Online Algorithms with Distributional Advice

Authors: Ilias Diakonikolas, Vasilis Kontonis, Christos Tzamos, Ali Vakilian, Nikos Zarifis

ICML 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We study the problem of designing online algorithms given advice about the input. While prior work had focused on deterministic advice, we only assume distributional access to the instances of interest, and the goal is to learn a competitive algorithm given access to i.i.d. samples. We aim to be competitive against an adversary with prior knowledge of the distribution, while also performing well against worst-case inputs. We focus on the classical online problems of ski-rental and prophet inequalities, and provide sample complexity bounds for the underlying learning tasks. First, we point out that for general distributions it is information-theoretically impossible to beat the worst-case competitive ratio with any finite sample size. As our main contribution, we establish strong positive results for well-behaved distributions. Specifically, for the broad class of log-concave distributions, we show that poly(1/ϵ) samples suffice to obtain a (1 + ϵ)-competitive ratio. Finally, we show that this sample upper bound is close to best possible, even for very simple classes of distributions.
Researcher Affiliation | Academia | ¹Department of Computer Sciences, University of Wisconsin, Madison, Wisconsin, USA; ²Toyota Technological Institute at Chicago (TTIC), Chicago, Illinois, USA. This work was done while the author was at the University of Wisconsin–Madison. Correspondence to: Ali Vakilian <vakilian@ttic.edu>.
Pseudocode | Yes | Algorithm 1: Algorithm for ε-Additive Approximation; Algorithm 2: SKI-RENTAL, a consistent and robust algorithm for ski-rental adapted from Mahdian et al. (2012); Algorithm 3: Threshold-Algorithm.
Open Source Code | No | The paper does not provide any concrete access to source code for the methodology described.
Open Datasets | No | The paper focuses on theoretical analysis of algorithms under different distributions and does not use or provide access to specific publicly available datasets.
Dataset Splits | No | The paper is theoretical and does not describe training, validation, or test dataset splits.
Hardware Specification | No | The paper is theoretical and does not mention any specific hardware used for experiments.
Software Dependencies | No | The paper is theoretical and does not list any specific software dependencies with version numbers.
Experiment Setup | No | The paper is theoretical and does not describe an experimental setup with hyperparameters or training settings.
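
Since the paper releases no code (see the Open Source Code row above), the sketch below is purely illustrative: a minimal Python example, not taken from the paper, of the ski-rental learning task the abstract describes. It assumes renting costs 1 per day and buying costs B, and it picks a buy threshold by minimizing the empirical cost ratio ALG/OPT over i.i.d. samples of the season length. The function names and the geometric sample distribution are hypothetical, and the paper's Algorithm 2 (adapted from Mahdian et al., 2012) additionally guarantees worst-case robustness, which this sketch omits.

    import random

    def rent_then_buy_cost(threshold: int, season: int, buy_price: int) -> int:
        """Cost of renting (1 per day) up to day threshold - 1,
        then buying at buy_price on day threshold."""
        if season < threshold:
            return season                       # season ended before we bought
        return (threshold - 1) + buy_price      # rented threshold - 1 days, then bought

    def offline_opt(season: int, buy_price: int) -> int:
        """Clairvoyant optimum: rent for the whole season, or buy on day one."""
        return min(season, buy_price)

    def learn_threshold(samples: list[int], buy_price: int) -> int:
        """Empirically choose the buy threshold minimizing the average
        cost ratio ALG/OPT over i.i.d. samples of the season length."""
        best_k, best_ratio = 1, float("inf")
        for k in range(1, buy_price + 2):       # k = buy_price + 1 ~ keep renting
            avg = sum(rent_then_buy_cost(k, s, buy_price) / offline_opt(s, buy_price)
                      for s in samples) / len(samples)
            if avg < best_ratio:
                best_k, best_ratio = k, avg
        return best_k

    if __name__ == "__main__":
        random.seed(0)
        B = 10                                  # hypothetical buying price

        def draw_season() -> int:
            # Hypothetical geometric-like (log-concave) season length, mean ~8 days.
            return 1 + int(random.expovariate(1 / 8))

        samples = [draw_season() for _ in range(2000)]
        print(f"learned buy threshold: day {learn_threshold(samples, B)}")

The paper proves that for log-concave distributions such as this one, poly(1/ϵ) samples suffice for its algorithms to reach a (1 + ϵ)-competitive ratio; this naive empirical-minimization sketch carries no such guarantee and is only meant to make the sample-based learning setup concrete.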