Mixing Predictions for Online Metric Algorithms

Authors: Antonios Antoniadis, Christian Coester, Marek Elias, Adam Polak, Bertrand Simon

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We design algorithms that combine predictions and are competitive against such dynamic combinations for a wide class of online problems, namely, metrical task systems. Against the best (in hindsight) unconstrained combination of ℓ predictors, we obtain a competitive ratio of O(ℓ²), and show that this is best possible. However, for a benchmark with a slightly constrained number of switches between different predictors, we can get a (1 + ϵ)-competitive algorithm. Moreover, our algorithms can be adapted to access predictors in a bandit-like fashion, querying only one predictor at a time. An unexpected implication of one of our lower bounds is a new structural insight about covering formulations for the k-server problem. The paper focuses on theoretical analysis, algorithm design, competitive ratios, and proofs (e.g., Theorems 1.1 to 1.8), without empirical studies, datasets, or experimental performance metrics.
Researcher Affiliation | Academia | 1 University of Twente, The Netherlands; 2 University of Oxford, Oxford, United Kingdom; 3 Bocconi University, Milan, Italy; 4 Max Planck Institute for Informatics, Saarbrücken, Germany; 5 CNRS / IN2P3 Computing Center, Villeurbanne, France.
Pseudocode | Yes | Algorithm 1: COMBINE A(P1, …, Pℓ) and Algorithm 2: BANDITCOMBINE(P1, …, Pℓ)
Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | No | The paper describes theoretical algorithms and proofs, not empirical experiments with datasets. Therefore, no information about publicly available training datasets is provided.
Dataset Splits | No | The paper describes theoretical algorithms and proofs, not empirical experiments with datasets. Therefore, no information about training/validation/test splits is provided.
Hardware Specification | No | The paper is theoretical and focuses on algorithm design and proofs. It does not mention any specific hardware used for running experiments.
Software Dependencies | No | The paper is theoretical and focuses on algorithm design and proofs. It does not list any specific software dependencies with version numbers.
Experiment Setup | No | The paper focuses on theoretical algorithm design and analysis, competitive ratios, and proofs. It does not describe any experimental setup details such as hyperparameters or system-level training settings.
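The Research Type row above summarizes the paper's setting: an online algorithm on a metric space that follows one of ℓ predictors and occasionally switches between them, paying both service and movement costs. The sketch below illustrates that general idea only; it is not the paper's COMBINE or BANDITCOMBINE algorithm. It uses the real line as the metric space, a hypothetical doubling-based switching rule, and function names of my own invention.

```python
# Illustrative sketch (hypothetical, NOT the paper's algorithm):
# follow one predictor's trajectory on the real line, and switch to
# the currently cheapest predictor once the followed predictor's
# cumulative cost exceeds twice the best one's. Movement cost between
# points a and b is |a - b|; costs[t] is the service-cost function
# for step t.

def predictor_cost(traj, costs, start=0.0):
    """Total movement + service cost of following a single predictor."""
    pos, total = start, 0.0
    for point, c in zip(traj, costs):
        total += abs(point - pos) + c(point)
        pos = point
    return total

def combine(trajectories, costs, start=0.0):
    """Follow the leading predictor, switching with a doubling rule.

    trajectories[i][t] is predictor i's suggested point at step t.
    Returns the combined algorithm's total cost.
    """
    ell = len(trajectories)
    cum = [0.0] * ell          # cumulative cost of each predictor
    pos_pred = [start] * ell   # each predictor's current position
    leader = 0                 # index of the predictor we follow
    pos, total = start, 0.0
    for t, c in enumerate(costs):
        # Bookkeeping: advance every predictor's own cost.
        for i, traj in enumerate(trajectories):
            p = traj[t]
            cum[i] += abs(p - pos_pred[i]) + c(p)
            pos_pred[i] = p
        # Doubling rule: abandon the leader once it costs twice
        # as much as the best predictor so far.
        best = min(range(ell), key=lambda i: cum[i])
        if cum[leader] > 2 * cum[best]:
            leader = best
        # Move to the leader's suggested point and pay its cost.
        p = trajectories[leader][t]
        total += abs(p - pos) + c(p)
        pos = p
    return total
```

The switch costs an extra movement each time the leader changes; the paper's constrained-switching benchmark (the (1 + ϵ)-competitive result) is precisely about keeping such switches rare, while the bandit variant would additionally restrict the bookkeeping loop to querying a single predictor per step.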