Sorting and Hypergraph Orientation under Uncertainty with Predictions

Authors: Thomas Erlebach, Murilo de Lima, Nicole Megow, Jens Schlöter

IJCAI 2023

Reproducibility Variable Result LLM Response
Research Type Theoretical We study learning-augmented algorithms for sorting and hypergraph orientation under uncertainty, assuming access to untrusted predictions for the uncertain values. Our algorithms provide improved performance guarantees for accurate predictions while maintaining worst-case guarantees that are best possible without predictions. For sorting, our algorithm uses the optimal number of queries for accurate predictions and at most twice the optimal number for arbitrarily wrong predictions. For hypergraph orientation, for any γ ≥ 2, we give an algorithm that uses at most 1 + 1/γ times the optimal number of queries for accurate predictions and at most γ times the optimal number for arbitrarily wrong predictions. These tradeoffs are the best possible. We also consider different error metrics and show that the performance of our algorithms degrades smoothly with the prediction error in all the cases where this is possible.
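To make the abstract's setting concrete, the following is a minimal illustrative sketch of sorting under explorable uncertainty with untrusted predictions, not the paper's exact algorithm: each element is known only through an uncertainty interval, a query reveals its true value, and a predicted value guides which element of an overlapping pair to query first. All function names, the tie-breaking rule, and the specific query policy here are assumptions made for illustration.

```python
# Illustrative sketch (assumed policy, not the authors' Algorithm 2):
# elements have open uncertainty intervals; querying reveals the true
# value; untrusted predictions steer which element of an overlapping
# pair gets queried first, falling back to querying both.

def sort_with_predictions(intervals, truths, preds):
    """intervals: list of (lo, hi) open intervals; truths: true values,
    revealed only on query; preds: untrusted predicted values.
    Returns (sorted index order, number of queries issued)."""
    n = len(intervals)
    known = {}      # index -> revealed true value
    queries = 0

    def query(i):
        nonlocal queries
        if i not in known:
            known[i] = truths[i]
            queries += 1

    def current(i):
        # A queried element's interval collapses to a point.
        return (known[i], known[i]) if i in known else intervals[i]

    def overlap(i, j):
        # Open-interval overlap: relative order still unresolved.
        (a, b), (c, d) = current(i), current(j)
        return a < d and c < b

    changed = True
    while changed:
        changed = False
        for i in range(n):
            for j in range(i + 1, n):
                if overlap(i, j):
                    lo_j, hi_j = current(j)
                    # Prediction-guided choice: if i's predicted value
                    # lies inside j's interval, query i first; otherwise
                    # query j. If the pair is still unresolved, query
                    # both (the worst case costs two queries per pair).
                    if lo_j < preds[i] < hi_j:
                        query(i)
                    else:
                        query(j)
                    if overlap(i, j):
                        query(i)
                        query(j)
                    changed = True
    order = sorted(range(n), key=lambda i: current(i)[0])
    return order, queries
```

With accurate predictions the guided choice tends to resolve a pair with one query; with arbitrarily wrong predictions the fallback still queries at most both endpoints of each unresolved pair, loosely mirroring the consistency/robustness tradeoff the abstract describes.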
Researcher Affiliation Academia (1) Department of Computer Science, Durham University, United Kingdom; (2) School of Informatics, University of Leicester, United Kingdom; (3) Faculty of Mathematics and Computer Science, University of Bremen, Germany
Pseudocode Yes Algorithm 1: Algorithm for hypergraph orientation under uncertainty w.r.t. error measure k_M, and Algorithm 2: A nicely degrading algorithm for sorting with uncertainty and predictions
Open Source Code No The paper does not provide any explicit statements about releasing source code or links to a code repository.
Open Datasets No The paper is theoretical and focuses on algorithm design and analysis; it does not use or refer to any specific public datasets for training, evaluation, or otherwise.
Dataset Splits No The paper is theoretical and does not describe experiments with dataset splits. It does not mention any training, validation, or test splits.
Hardware Specification No The paper is theoretical and focuses on algorithm design and proofs. It does not mention any specific hardware used for running experiments or any computational resources.
Software Dependencies No The paper is theoretical and describes algorithms. It does not list any specific software dependencies or version numbers (e.g., Python, PyTorch, CPLEX) that would be needed for replication.
Experiment Setup No The paper is theoretical and does not describe empirical experiments. Therefore, it does not provide details on experimental setup, hyperparameters, or training configurations.