Online learning with kernel losses

Authors: Niladri Chatterji, Aldo Pacchiano, Peter Bartlett

ICML 2019

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Theoretical | "All the proofs, technical details and experiments are relegated to the appendix." |
| Researcher Affiliation | Academia | "University of California, Berkeley. Correspondence to: Aldo Pacchiano <pacchiano@berkeley.edu>, Niladri S. Chatterji <chatterji@berkeley.edu>." |
| Pseudocode | Yes | Algorithm 1: Finite dimensional proxy construction; Algorithm 2: Bandit Information: Exponential Weights; Algorithm 3: Full Information: Exponential Weights; Algorithm 4: Full Information: Conditional Gradient. |
| Open Source Code | No | The paper does not provide any explicit statements about open-source code availability, nor does it include links to a code repository or mention code in supplementary materials. |
| Open Datasets | No | The paper is theoretical and does not present empirical studies or use specific datasets for experimental evaluation. |
| Dataset Splits | No | The paper is theoretical and does not present empirical studies with datasets; therefore no training/validation/test splits are discussed. |
| Hardware Specification | No | The paper is theoretical and does not present empirical experiments; therefore no hardware specifications are provided. |
| Software Dependencies | No | The paper is theoretical and does not present empirical experiments; therefore no software dependencies with version numbers are specified. |
| Experiment Setup | No | The paper is theoretical and does not present empirical experiments; therefore no experimental setup details such as hyperparameters or training settings are provided. |