Delegated Online Search

Authors: Pirmin Braun, Niklas Hahn, Martin Hoefer, Conrad Schecker

IJCAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We show that P can obtain a Θ(1/n)-approximation and provide more fine-grained bounds independent of n based on two parameters. If the ratio of maximum and minimum utility for A is bounded by a factor α, we obtain an Ω(log log α / log α)-approximation algorithm, and we show that this is best possible. If P cannot distinguish options with the same value for herself, we show that ratios polynomial in 1/α cannot be avoided. If the utilities of P and A for each option are related by a factor β, we obtain an Ω(1/log β)-approximation, and O(log log β / log β) is best possible. (The α- and β-dependent guarantees are illustrated numerically in the sketch below the table.)
Researcher Affiliation | Academia | 1 Goethe University Frankfurt, Germany; 2 Sorbonne Université, CNRS, LIP6 UMR 7606, Paris, France. pirminbraun16@gmail.com, niklas.hahn@lip6.fr, {mhoefer,schecker}@em.uni-frankfurt.de
Pseudocode | No | Pseudocode is not included in the conference version; the authors state: "Additional descriptions, pseudocode for our algorithms as well as all missing proofs can be found in the full version of this paper [Braun et al., 2022]."
Open Source Code | No | The paper does not mention providing open-source code for the described methodology.
Open Datasets | No | The paper is theoretical and does not use or refer to any publicly available or open datasets for empirical training. Table 1 presents an illustrative example, not a dataset used for experiments.
Dataset Splits | No | The paper is theoretical and does not involve empirical training or validation on datasets with specified splits.
Hardware Specification | No | The paper is theoretical and does not mention any hardware used for experiments.
Software Dependencies | No | The paper is theoretical and does not mention any specific software dependencies with version numbers for implementation or experimentation.
Experiment Setup | No | The paper is theoretical and describes algorithms and proofs, not empirical experiments. Therefore, no experimental setup details such as hyperparameters or training configurations are provided.
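
A minimal numeric sketch of how the approximation guarantees quoted in the Research Type row scale with the parameters α and β. This is not code from the paper (which provides no implementation); it assumes natural logarithms and ignores the constants hidden by the Ω and O notation, so it only shows order-of-magnitude trends.

```python
# Illustrative sketch (not from the paper): scaling of the fine-grained
# approximation guarantees with the parameters alpha and beta.
# Assumptions: natural logarithms, constants in the Omega/O bounds set to 1.
import math

def alpha_bound(alpha: float) -> float:
    """Scaling of the Omega(log log alpha / log alpha) guarantee (needs alpha > e)."""
    return math.log(math.log(alpha)) / math.log(alpha)

def beta_bound(beta: float) -> float:
    """Scaling of the Omega(1 / log beta) guarantee (needs beta > 1)."""
    return 1.0 / math.log(beta)

if __name__ == "__main__":
    for a in (10, 100, 10_000):
        print(f"alpha={a:>6}: guarantee scales like {alpha_bound(a):.3f}")
    for b in (10, 100, 10_000):
        print(f"beta ={b:>6}: guarantee scales like {beta_bound(b):.3f}")
```

Both ratios shrink slowly as α and β grow, which matches the paper's message that the achievable fraction of the principal's optimum degrades only logarithmically in these parameters rather than with the number of rounds n.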