Minimax Regret of Switching-Constrained Online Convex Optimization: No Phase Transition

Authors: Lin Chen, Qian Yu, Hannah Lawrence, Amin Karbasi

NeurIPS 2020

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Theoretical | In this paper, we fully characterize the minimax regret of switching-constrained online convex optimization. Since the result is theoretical in nature, the broader impact discussion is not applicable. |
| Researcher Affiliation | Academia | Lin Chen (Yale University; Simons Institute for the Theory of Computing), Qian Yu (University of Southern California), Hannah Lawrence (Massachusetts Institute of Technology), Amin Karbasi (Yale University) |
| Pseudocode | No | The paper describes algorithms conceptually, such as a 'mini-batching algorithm' and 'adversarial strategies', but does not provide any formal pseudocode or algorithm blocks (a background sketch of such a mini-batching scheme follows this table). |
| Open Source Code | No | The paper does not provide any statement about releasing source code or a link to a code repository. |
| Open Datasets | No | The paper is theoretical and does not involve empirical evaluation on datasets, so no publicly available datasets are referenced. |
| Dataset Splits | No | The paper is theoretical and does not involve empirical evaluation on datasets, so no training/validation/test splits are reported. |
| Hardware Specification | No | The paper is theoretical and describes no empirical experiments that would require hardware; no hardware is mentioned. |
| Software Dependencies | No | The paper is theoretical, describes no empirical experiments, and mentions no software dependencies or version numbers. |
| Experiment Setup | No | The paper is theoretical and does not describe any empirical experimental setup, hyperparameters, or system-level training settings. |
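For background on the 'mini-batching algorithm' mentioned under Pseudocode: the paper describes it only conceptually, so the following is a minimal sketch of the standard mini-batching idea for switching-constrained online convex optimization, not the authors' construction. The function name `minibatched_ogd`, the gradient-oracle interface `loss_grads`, the L2-ball domain, and the step-size choice are all illustrative assumptions.

```python
import numpy as np

def minibatched_ogd(loss_grads, T, K, radius=1.0, dim=2):
    """Background sketch (not the paper's algorithm): limit the number of
    switches by splitting the T rounds into at most K blocks and playing a
    fixed action within each block.

    loss_grads: callable (t, x) -> gradient of the round-t loss at x
                (hypothetical interface for illustration only).
    """
    block_len = int(np.ceil(T / K))   # rounds per block
    x = np.zeros(dim)                 # current action, inside the L2 ball
    eta = radius / np.sqrt(K)         # illustrative step size
    grad_sum = np.zeros(dim)
    actions = []
    for t in range(T):
        actions.append(x.copy())      # same point is played all block long
        grad_sum += loss_grads(t, x)  # accumulate gradients over the block
        if (t + 1) % block_len == 0:  # block boundary: the only place we switch
            x = x - eta * grad_sum    # one gradient step on the block's sum
            norm = np.linalg.norm(x)
            if norm > radius:         # project back onto the L2 ball
                x *= radius / norm
            grad_sum = np.zeros(dim)
    # The played action changes only at block boundaries, so at most K - 1
    # switches occur over the T rounds.
    return actions

# Example usage with linear losses f_t(x) = <g_t, x> on the unit ball.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gs = rng.normal(size=(100, 2))
    played = minibatched_ogd(lambda t, x: gs[t], T=100, K=5)
```

Grouping rounds into blocks this way is the standard route to keeping the switch count below K while retaining an online-gradient-descent-style guarantee; the paper's contribution is the tight minimax characterization, which this sketch does not reproduce.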