Hardness of Online Sleeping Combinatorial Optimization Problems

Authors: Satyen Kale, Chansoo Lee, Dávid Pál

NeurIPS 2016

Reproducibility

Variable | Result | LLM Response
Research Type | Theoretical | In this paper, we resolve this open problem and prove that the ONLINE SABOTAGED SHORTEST PATH problem is computationally hard. Specifically, we show that a polynomial-time low-regret algorithm for this problem implies a polynomial-time algorithm for PAC learning DNF expressions, which is a long-standing open problem. The best known algorithm for PAC learning DNF expressions on n variables has time complexity 2^{Õ(n^{1/3})} [Klivans and Servedio, 2001]. Our reduction framework (Section 4) in fact shows a general result: any online sleeping combinatorial optimization problem with two simple structural properties is as hard as PAC learning DNF expressions.
Researcher Affiliation | Collaboration | Satyen Kale (Yahoo Research, satyen@satyenkale.com); Chansoo Lee (Univ. of Michigan, Ann Arbor, chansool@umich.edu); Dávid Pál (Yahoo Research, dpal@yahoo-inc.com). Current affiliation: Google Research. This work was done while the authors were at Yahoo Research.
Pseudocode | Yes | Algorithm 1: ALGDISJ, an algorithm for learning disjunctions
Open Source Code | No | The paper is theoretical and does not mention releasing any source code for its proposed framework or analysis.
Open Datasets | No | This is a theoretical paper focused on hardness proofs and reductions, not empirical experiments. Therefore, no datasets are used or specified for training.
Dataset Splits | No | This is a theoretical paper focused on hardness proofs and reductions, not empirical experiments. Therefore, no training/validation/test splits are relevant or provided.
Hardware Specification | No | This is a theoretical paper and does not describe any computational experiments or hardware used.
Software Dependencies | No | This is a theoretical paper focused on hardness proofs and reductions; it contains no software implementation details that would require specific versioned dependencies.
Experiment Setup | No | This is a theoretical paper and does not describe any empirical experimental setup, hyperparameters, or training configurations.
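The pseudocode row above refers to the paper's ALGDISJ for learning disjunctions. As background only, a minimal sketch of the classic elimination algorithm for PAC learning monotone disjunctions is shown below; this is not the paper's ALGDISJ (which operates in the online sleeping-experts setting), and all names here are hypothetical.

```python
# Sketch of the classic elimination algorithm for PAC learning monotone
# disjunctions (a standard textbook routine, not the paper's ALGDISJ).
# Assumed target form: x_{i1} OR ... OR x_{ik} over n boolean variables.

def learn_monotone_disjunction(examples, n):
    """examples: list of (x, y) pairs, x a 0/1 tuple of length n, y in {0, 1}.

    Start with all n variables as candidates. Any variable that is set to 1
    in a negative example cannot appear in the target disjunction (it would
    force the label to 1), so eliminate it.
    """
    relevant = set(range(n))
    for x, y in examples:
        if y == 0:
            relevant -= {i for i in range(n) if x[i] == 1}
    return relevant  # hypothesis: OR of x_i over i in relevant


def predict(relevant, x):
    """Evaluate the learned disjunction on input x."""
    return int(any(x[i] == 1 for i in relevant))
```

For example, with target x_0 OR x_2 over n = 3 variables, the negative example (0, 1, 0) eliminates index 1, leaving the hypothesis {0, 2}, which classifies consistent examples correctly.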