Fast Rates for Bandit PAC Multiclass Classification
Authors: Liad Erez, Alon Peled-Cohen, Tomer Koren, Yishay Mansour, Shay Moran
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | Our main contribution is in designing a novel learning algorithm for the agnostic (ε, δ)-PAC version of the problem, with sample complexity of O((poly(K) + 1/ε²) log(\|H\|/δ)) for any finite hypothesis class H. |
| Researcher Affiliation | Collaboration | Liad Erez (Tel-Aviv University, liaderez@mail.tau.ac.il); Alon Cohen (Tel-Aviv University and Google Research, alonco@tauex.tau.ac.il); Tomer Koren (Tel-Aviv University and Google Research, tkoren@tauex.tau.ac.il); Yishay Mansour (Tel-Aviv University and Google Research, mansour.yishay@gmail.com); Shay Moran (Technion and Google Research, shaymoran1@gmail.com) |
| Pseudocode | Yes | Algorithm 1: Bandit PAC Multiclass Classification via Log Barrier Stochastic Optimization; Algorithm 2: Stochastic Frank-Wolfe with SPIDER gradient estimates (see the sketch after this table). |
| Open Source Code | No | The paper does not provide an explicit statement about releasing code or a link to a code repository. The NeurIPS checklist indicates it is a theoretical paper. |
| Open Datasets | No | The paper is theoretical and does not use external public datasets. It describes constructing a dataset S internally as part of the algorithm, but this is not a publicly available dataset. |
| Dataset Splits | No | The paper is theoretical and does not describe training, validation, or test dataset splits for empirical experiments. |
| Hardware Specification | No | The paper is theoretical and does not describe specific hardware used to run experiments. |
| Software Dependencies | No | The paper is theoretical and does not specify software dependencies with version numbers for reproducibility. |
| Experiment Setup | No | The paper is theoretical and does not provide details about an empirical experimental setup, such as hyperparameters or training configurations. |
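
For context on the Pseudocode row: stochastic Frank-Wolfe with SPIDER gradient estimates is a standard variance-reduction pattern, so the general mechanics can be illustrated independently of the paper. The sketch below is not the paper's Algorithm 2 (whose objective and linear oracle are specific to its log-barrier formulation); it is a minimal, assumed illustration on a toy stochastic least-squares problem over the probability simplex, with illustrative batch sizes and the standard 2/(t+2) step size.

```python
import numpy as np

# Illustrative sketch only: SPIDER-style variance reduction inside a
# Frank-Wolfe loop, on an assumed toy problem (stochastic least squares
# over the simplex). Not the paper's Algorithm 2.

rng = np.random.default_rng(0)
d, n = 10, 2000
A = rng.normal(size=(n, d))
x_star = rng.dirichlet(np.ones(d))
b = A @ x_star + 0.01 * rng.normal(size=n)

def stoch_grad(x, idx):
    """Minibatch gradient of (1/2m)||A_idx x - b_idx||^2."""
    Ai = A[idx]
    return Ai.T @ (Ai @ x - b[idx]) / len(idx)

def lmo_simplex(g):
    """Linear minimization oracle over the simplex: the best vertex e_i."""
    s = np.zeros_like(g)
    s[np.argmin(g)] = 1.0
    return s

x = np.ones(d) / d                       # start at the simplex center
x_prev = x.copy()
epoch_len, big_batch, small_batch = 20, 512, 16
v = None
for t in range(200):
    if t % epoch_len == 0:
        # SPIDER checkpoint: fresh large-batch gradient estimate.
        idx = rng.choice(n, size=big_batch, replace=False)
        v = stoch_grad(x, idx)
    else:
        # SPIDER recursion: correct the running estimate with a
        # gradient difference on a small shared minibatch.
        idx = rng.choice(n, size=small_batch, replace=False)
        v = v + stoch_grad(x, idx) - stoch_grad(x_prev, idx)
    s = lmo_simplex(v)                   # Frank-Wolfe direction
    gamma = 2.0 / (t + 2.0)              # standard FW step size
    x_prev = x.copy()
    x = x + gamma * (s - x)

print("final objective:", 0.5 * np.mean((A @ x - b) ** 2))
```

The key design point the sketch shows is why SPIDER suits projection-free methods: the recursive estimator keeps the gradient-estimate variance low with only small minibatches between checkpoints, so each iteration costs one cheap linear minimization rather than a projection or a full-batch gradient.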