A Trichotomy for Transductive Online Learning

Authors: Steve Hanneke, Shay Moran, Jonathan Shafer

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | Qualitatively, we prove a trichotomy, stating that the minimal number of mistakes made by the learner as n grows can take only one of precisely three values: n, Θ(log(n)), or Θ(1). Furthermore, this behavior is determined by a combination of the VC dimension and the Littlestone dimension. Quantitatively, we show a variety of bounds relating the number of mistakes to well-known combinatorial dimensions. In particular, we improve the known lower bound on the constant in the Θ(1) case from Ω(√log(d)) to Ω(log(d)), where d is the Littlestone dimension. Finally, we extend our results to cover multiclass classification and the agnostic setting.
Researcher Affiliation | Academia | Steve Hanneke, Department of Computer Science, Purdue University, steve.hanneke@gmail.com; Shay Moran, Faculty of Mathematics, Faculty of Computer Science, and Faculty of Data and Decision Sciences, Technion Israel Institute of Technology, smoran@technion.ac.il; Jonathan Shafer, Computer Science Division, UC Berkeley, shaferjo@berkeley.edu
Pseudocode | Yes | Algorithm 1: An adversary that forces Ω(log(LD(H))) mistakes. (An illustrative code sketch of the transductive setting follows the table.)
Open Source Code | No | The paper focuses on theoretical contributions (theorems, proofs, bounds) and does not describe a software methodology or provide any statements or links regarding the availability of source code for its contributions.
Open Datasets | No | The paper is theoretical and does not involve experiments with datasets, so it does not mention specific public datasets or their access information.
Dataset Splits | No | The paper is theoretical and does not report on experiments with data, so no train/validation/test splits are mentioned.
Hardware Specification | No | The paper is theoretical and does not report on computational experiments, so no hardware specifications are provided.
Software Dependencies | No | The paper is theoretical and does not report on computational experiments that would require specific software dependencies with version numbers.
Experiment Setup | No | The paper is theoretical and does not report on computational experiments, so no experimental setup details such as hyperparameters or training configurations are provided.
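
Since the paper is purely theoretical and ships no code, the following is a minimal, hypothetical Python sketch, not the paper's Algorithm 1 (whose adversary construction is not reproduced here). It makes the quantities behind the trichotomy concrete: it computes the VC dimension and Littlestone dimension of a toy threshold class, then runs the classic Halving learner in the transductive setting, where the instance sequence is fixed and known to the learner up front. The class `H`, the domain, and all function names are invented for this illustration.

```python
# Illustrative sketch only (assumptions: toy threshold class, Halving learner).
# Not the paper's Algorithm 1, which is an adversary construction.

from itertools import combinations

# A finite hypothesis class over a finite domain: each hypothesis is a tuple
# of labels, one per domain point. Here H = all thresholds on 4 points.
DOMAIN = list(range(4))
H = [tuple(1 if x >= t else 0 for x in DOMAIN) for t in range(5)]

def vc_dimension(H, domain):
    """Largest d such that some d-subset of the domain is shattered by H."""
    for d in range(len(domain), -1, -1):
        for S in combinations(range(len(domain)), d):
            patterns = {tuple(h[i] for i in S) for h in H}
            if len(patterns) == 2 ** d:  # all 2^d labelings realized
                return d
    return 0

def littlestone_dimension(H, points):
    """Depth of the deepest complete mistake tree realizable by H."""
    if not H or not points:
        return 0
    best = 0
    for i, x in enumerate(points):
        rest = points[:i] + points[i + 1:]
        H0 = [h for h in H if h[x] == 0]
        H1 = [h for h in H if h[x] == 1]
        if H0 and H1:  # x can root a mistake tree only if both labels are realizable
            best = max(best, 1 + min(littlestone_dimension(H0, rest),
                                     littlestone_dimension(H1, rest)))
    return best

def halving_transductive(H, sequence, target):
    """Run the Halving learner on a known instance sequence; count mistakes."""
    version_space = list(H)
    mistakes = 0
    for x in sequence:
        votes = sum(h[x] for h in version_space)
        prediction = 1 if 2 * votes >= len(version_space) else 0
        truth = target[x]
        if prediction != truth:
            mistakes += 1
        # Keep only hypotheses consistent with the revealed label.
        version_space = [h for h in version_space if h[x] == truth]
    return mistakes

print("VC dimension:", vc_dimension(H, DOMAIN))            # thresholds: 1
print("Littlestone dimension:", littlestone_dimension(H, DOMAIN))
target = H[2]  # a realizable labeling of the fixed sequence
print("Halving mistakes:", halving_transductive(H, DOMAIN, target))
```

On this toy class the script reports VC dimension 1 and Littlestone dimension 2, and Halving makes at most log2|H| mistakes on any realizable sequence, a constant once the class is fixed, consistent with the Θ(1) regime of the trichotomy.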