No Internal Regret with Non-convex Loss Functions

Authors: Dravyansh Sharma

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | In the full information setting, for the case of L-Lipschitz loss functions, we obtain an algorithm which achieves regret O(√(dT log RLT)). Further, for the case of one-dimensional piecewise constant functions, we obtain an algorithm which achieves regret O(√(T log KT)) under a mild smoothness assumption, where K is a bound on the number of pieces in each loss function. We show that our bounds are near-optimal by providing an Ω(√T) lower bound on the internal regret for our loss functions. (These bounds are restated, together with the standard internal-regret definition, below the table.)
Researcher Affiliation | Academia | Dravyansh Sharma, Carnegie Mellon University, dravyans@cs.cmu.edu
Pseudocode | Yes | Algorithm 1: DEPOM(η, ϵ); Algorithm 2: CEPOM(η); Algorithm 3: INTERNAL EXP3(η_t, γ_t). (An illustrative internal-regret sketch, distinct from these algorithms, follows below the table.)
Open Source Code | No | The paper does not include any explicit statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | No | The paper is theoretical and focuses on algorithm design and regret bounds. It does not use specific datasets for empirical training or evaluation; therefore, no concrete access information for a dataset is provided.
Dataset Splits | No | The paper is theoretical and does not conduct empirical experiments requiring dataset splits for training, validation, or testing.
Hardware Specification | No | The paper is theoretical and does not describe any specific hardware used for experiments.
Software Dependencies | No | The paper is theoretical and focuses on algorithms and mathematical proofs. It does not mention any specific software dependencies or their version numbers used for implementation or experiments.
Experiment Setup | No | The paper is theoretical and does not detail an empirical experimental setup with hyperparameters or training configurations for a computational model.
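For context, the following restates in LaTeX the standard definition of internal regret for a randomized player over finitely many actions, together with the bounds quoted in the Research Type row. This is a reading aid under the usual experts-setting formulation, not text from the paper; the paper's continuous-action setting replaces the maximum over action pairs with a maximum over action modifications.

\[
  \mathrm{IR}(T) \;=\; \max_{i \neq j} \sum_{t=1}^{T} p_t(i)\,\bigl(\ell_t(i) - \ell_t(j)\bigr),
\]
where $p_t$ is the distribution played at round $t$ and $\ell_t$ the loss vector. The abstract's bounds then read
\[
  \mathrm{IR}(T) = O\!\bigl(\sqrt{dT \log(RLT)}\bigr) \quad \text{($L$-Lipschitz losses)}, \qquad
  \mathrm{IR}(T) = O\!\bigl(\sqrt{T \log(KT)}\bigr) \quad \text{(piecewise constant)},
\]
with a matching $\Omega(\sqrt{T})$ lower bound.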
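The Pseudocode row lists the paper's algorithms (DEPOM, CEPOM, INTERNAL EXP3), which are not reproduced here. As a loose illustration of how internal-regret guarantees are typically obtained in the finite-action (experts) setting, here is a minimal Python sketch of the classic Blum-Mansour swap-regret reduction: one multiplicative-weights learner per action, combined through the stationary distribution of the learners' recommendations. This is a standard textbook construction, not the paper's method, and the function names (swap_regret_play, stationary_distribution) are our own.

import numpy as np

def stationary_distribution(Q):
    # Left fixed point p = p Q of a row-stochastic matrix Q, i.e. the
    # eigenvector of Q^T with eigenvalue 1. With strictly positive
    # entries (true here), this Perron vector is unique and positive.
    evals, evecs = np.linalg.eig(Q.T)
    p = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
    p = np.abs(p)                  # fix numerical sign flips
    return p / p.sum()

def swap_regret_play(losses, eta=0.1):
    # Blum-Mansour reduction: run one multiplicative-weights learner per
    # action; learner i is charged p_t[i] times the loss vector, and the
    # distribution p_t actually played is the stationary distribution of
    # the matrix whose i-th row is learner i's distribution. Low swap
    # regret implies low internal regret.
    T, K = losses.shape
    W = np.ones((K, K))            # unnormalized weights, one row per learner
    plays = np.zeros((T, K))
    for t in range(T):
        Q = W / W.sum(axis=1, keepdims=True)        # row-stochastic matrix
        p = stationary_distribution(Q)
        plays[t] = p
        W *= np.exp(-eta * np.outer(p, losses[t]))  # scaled loss per learner
    return plays

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, K = 2000, 5
    losses = rng.random((T, K))    # adversary replaced by random losses
    P = swap_regret_play(losses)
    internal_regret = max(
        float(np.sum(P[:, i] * (losses[:, i] - losses[:, j])))
        for i in range(K)
        for j in range(K)
        if i != j
    )
    print(f"internal regret: {internal_regret:.2f} over T = {T} rounds")

Running this and computing max over pairs i ≠ j of the cumulative weighted loss difference, as in the definition above, should show the internal regret growing sublinearly in T, consistent with the √T-type rates quoted in the table.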