Online Sampling and Decision Making with Low Entropy

Authors: Mohammad Taghi Hajiaghayi, Dariusz R. Kowalski, Piotr Krysta, Jan Olkowski

IJCAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We show that with a very small O(log log n) entropy, an almost-optimal approximation of the value of the k largest numbers can be selected, with only a polynomially small additive error, for k < log n / log log n. Our solution works for an exponentially larger range of the parameter k compared to previously known algorithms (STOC 15 [22]). We also prove a matching lower bound on the entropy of optimal (and even close-to-optimal, with respect to competitive ratio) solutions for this problem of choosing the k largest numbers. No previous lower bound on entropy was known for this problem for k > 1.
Researcher Affiliation | Academia | (1) Department of Computer Science, University of Maryland, College Park, Maryland, USA; (2) School of Computer and Cyber Sciences, Augusta University, Augusta, Georgia, USA; (3) Department of Computer Science, University of Liverpool, Liverpool, UK
Pseudocode | Yes | Algorithm 1: Find permutations distribution (for positive events). Algorithm 2: Multiple-choice secretary algorithm with adaptive thresholds and permutation distribution D = {Πℓ, Lℓ}.
Open Source Code | No | The paper does not provide any statement or link indicating that source code for its methodology is openly available.
Open Datasets | No | The paper is theoretical and does not conduct experiments on publicly available datasets. It refers to "numbers chosen from an unknown distribution" only as part of the problem definition for theoretical analysis.
Dataset Splits | No | The paper focuses on theoretical analysis and does not describe empirical experiments with training, validation, or test splits.
Hardware Specification | No | The paper is theoretical and does not describe any experimental setup or hardware used for computation.
Software Dependencies | No | The paper is theoretical and does not describe an implementation or specific software dependencies with version numbers.
Experiment Setup | No | The paper is theoretical and does not describe any experimental setup details, hyperparameters, or training configurations.
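The paper's Algorithm 2 combines adaptive thresholds with a low-entropy permutation distribution D; that construction is not reproduced here. As a rough illustration of the threshold idea alone, a standard sample-then-threshold sketch of the multiple-choice (k-item) secretary problem might look like the following. The function name, the sampling fraction, and the rule "threshold = k-th largest sample value" are illustrative assumptions, not the authors' algorithm.

```python
def k_secretary_threshold(values, k, sample_frac=0.5):
    """Illustrative k-secretary sketch (NOT the paper's Algorithm 2):
    observe a prefix of the stream, set the threshold to the k-th largest
    observed value, then accept up to k later items that beat it."""
    n = len(values)
    m = max(1, int(sample_frac * n))  # length of the observation phase
    sample = values[:m]
    # k-th largest sample value (overall max if fewer than k items seen).
    threshold = sorted(sample, reverse=True)[min(k, m) - 1]
    picked = []
    for v in values[m:]:
        if len(picked) < k and v > threshold:
            picked.append(v)  # acceptance is irrevocable and online
    return picked
```

Such algorithms are analyzed in expectation over a random arrival order; the paper's contribution is showing that only O(log log n) bits of entropy in that order, rather than a uniformly random permutation, suffice for k < log n / log log n.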