Parsimonious Learning-Augmented Approximations for Dense Instances of $\mathcal{NP}$-hard Problems
Authors: Evripidis Bampis, Bruno Escoffier, Michalis Xefteris
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | In this paper we extend and speed up this scheme using a logarithmic number of one-bit predictions. We propose a learning-augmented framework which aims at finding fast algorithms which guarantee approximation consistency, smoothness and robustness with respect to the prediction error. We provide such algorithms, which moreover use predictions parsimoniously, for dense instances of various optimization problems. (An illustrative sketch of the consistency/robustness trade-off appears after this table.) |
| Researcher Affiliation | Academia | (1) Sorbonne Université, CNRS, LIP6, F-75005 Paris, France; (2) Institut Universitaire de France, Paris, France |
| Pseudocode | Yes | Algorithm 1 EVALUATE($p$, $S$, $\{\hat{a}_i : i \in S\}$) [...] Algorithm 2 LINEARIZE($p(x)$, $U$, $S$, $\{\hat{a}_i : i \in S\}$, $\epsilon$) (a hedged sketch of the sampling idea also follows the table) |
| Open Source Code | No | The paper does not contain any explicit statements or links indicating that source code for the described methodology is provided. |
| Open Datasets | No | The paper is theoretical and does not conduct experiments on datasets, so there is no mention of dataset availability. |
| Dataset Splits | No | The paper is theoretical and does not conduct experiments, so no training, validation, or test dataset splits are described. |
| Hardware Specification | No | The paper is theoretical and does not describe any experimental setup or specific hardware used. |
| Software Dependencies | No | The paper is theoretical and does not describe any specific software dependencies or version numbers for its implementation. |
| Experiment Setup | No | The paper is theoretical and does not describe any experimental setup details such as hyperparameters or training configurations. |
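
The abstract quoted in the Research Type row describes algorithms that are consistent (good when predictions are accurate), smooth (degrading gracefully with the prediction error), and robust (never much worse than the best prediction-free guarantee). A standard way to obtain robustness in learning-augmented algorithms, shown below as a minimal sketch rather than the paper's exact construction, is to compare a prediction-guided candidate against a prediction-free baseline and keep the better one. The names `adj`, `predicted_side`, and `trials` are illustrative, and MAX-CUT is used only as a concrete dense problem.

```python
import random

def robust_max_cut(adj, predicted_side, trials=100, seed=0):
    """Generic learning-augmented robustification sketch (illustrative,
    not the paper's exact algorithm): keep the better of a cut built from
    the predicted assignment and a prediction-free random baseline.

    adj            : symmetric adjacency list, adj[u] = [(v, weight), ...]
    predicted_side : list of predicted bits, one per vertex
    """
    rng = random.Random(seed)
    n = len(adj)

    def cut_value(side):
        # Count each undirected edge once via u < v.
        return sum(w for u in range(n) for v, w in adj[u]
                   if u < v and side[u] != side[v])

    best, best_val = list(predicted_side), cut_value(predicted_side)
    for _ in range(trials):
        # A uniformly random assignment cuts half the total edge weight
        # in expectation, giving a prediction-free fallback guarantee.
        cand = [rng.randint(0, 1) for _ in range(n)]
        val = cut_value(cand)
        if val > best_val:
            best, best_val = cand, val
    return best, best_val
```

Taking the better of the two candidates yields consistency when the predictions are accurate and robustness otherwise; smoothness comes from the fact that the value of the predicted cut degrades gradually with the number of wrongly predicted bits.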
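The EVALUATE signature quoted in the Pseudocode row suggests that a polynomial in the unknown optimal bits is evaluated using predicted bits on a small sample $S$ only, which is how a logarithmic number of one-bit predictions can suffice. A minimal sketch of that sampling idea, under the hypothetical names below (`weights`, `sample`, `predicted_bits`), estimates a linear form $a_i = \sum_j w_{ij} x_j$ by rescaling a sample average:

```python
def estimate_linear_form(weights, sample, predicted_bits, n):
    """Hypothetical sketch of sampling-based evaluation: estimate
    a_i = sum_j w[j] * x_j for an unknown 0/1 solution x from one-bit
    predictions on a sampled index set S, rescaled by n / |S|.

    weights        : sequence of coefficients w[j], one per variable
    sample         : list of sampled indices (the set S)
    predicted_bits : dict mapping each j in S to the predicted bit for x_j
    n              : total number of variables
    """
    partial = sum(weights[j] * predicted_bits[j] for j in sample)
    return partial * n / len(sample)
```

With bounded coefficients, a standard Chernoff bound makes a sample of size $O(\log n / \epsilon^2)$ sufficient for an additive $\epsilon n$ error with high probability, consistent with the abstract's "logarithmic number of one-bit predictions"; errors in the predicted bits enter the estimate additively, which is one plausible source of the smoothness guarantee.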