Learning from a Sample in Online Algorithms

Authors: C.J. Argue, Alan Frieze, Anupam Gupta, Christopher Seiler

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We consider three central problems in optimization: the restricted assignment load-balancing problem, the Steiner tree network design problem, and facility location clustering. We consider the online setting, where the input arrives over time and irrevocable decisions must be made without knowledge of the future. For all these problems, any online algorithm must incur a cost that is approximately log |I| times the optimal cost in the worst case, where |I| is the length of the input. But can we go beyond the worst case? In this work we give algorithms that perform substantially better when a p-fraction of the input is given as a sample: the algorithms use this sample to learn a good strategy for the rest of the input. (A toy sketch of this sample-then-act setting is given after the table.)
Researcher Affiliation | Academia | Departments of Computer Science and Mathematics, Carnegie Mellon University, Pittsburgh, PA 15213
Pseudocode | Yes | Algorithm 1: Robust Algorithm to Compute Weights
Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to a code repository.
Open Datasets | No | The paper describes abstract instances (I, S, R) and a 'training phase' in the context of sampling, but does not refer to specific, publicly available datasets for empirical training.
Dataset Splits | No | The paper does not describe any validation dataset splits or mention the use of a validation set.
Hardware Specification | No | The paper is theoretical and does not describe any hardware used for experiments.
Software Dependencies | No | The paper is theoretical and does not specify any software dependencies with version numbers.
Experiment Setup | No | The paper focuses on theoretical algorithm design and analysis, and thus does not include details on experimental setup such as hyperparameters or training configurations.
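To make the online-with-a-sample setting from the abstract concrete, below is a minimal Python sketch of the general sample-then-act paradigm on a toy one-dimensional facility-location instance. This is an illustrative assumption, not the paper's Algorithm 1: the function name, the greedy opening rule on the sample, and the online connect-or-open rule are all hypothetical choices made only to show how a p-fraction sample can inform irrevocable online decisions.

```python
import random


def toy_online_facility_location(points, p, facility_cost):
    """Toy sample-then-act heuristic (NOT the paper's algorithm).

    Phase 1 ("training"): greedily open facilities on the first p-fraction
    of the input so every sample point is within facility_cost of a facility.
    Phase 2 (online): each remaining point irrevocably either connects to
    its nearest open facility or opens a new facility at its own location.
    """
    n = len(points)
    split = int(p * n)
    sample, online_part = points[:split], points[split:]

    # Learn from the sample: greedy left-to-right facility opening.
    facilities = []
    for x in sorted(sample):
        if not facilities or x - facilities[-1] > facility_cost:
            facilities.append(x)
    cost = facility_cost * len(facilities)

    # Serve the remaining (1 - p)-fraction online, one point at a time.
    for x in online_part:
        nearest = min((abs(x - f) for f in facilities), default=float("inf"))
        if nearest <= facility_cost:
            cost += nearest              # connect to an already-open facility
        else:
            facilities.append(x)         # irrevocably open a new facility
            cost += facility_cost
    return cost


if __name__ == "__main__":
    random.seed(0)
    pts = [random.uniform(0, 100) for _ in range(1000)]
    print(toy_online_facility_location(pts, p=0.1, facility_cost=5.0))
```

The point of the sketch is only the two-phase structure: decisions in the online phase are never revised, and the sample is used solely to commit to a strategy (here, an initial set of open facilities) before the rest of the input arrives.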