Improved Sample Complexity for Multiclass PAC Learning
Authors: Steve Hanneke, Shay Moran, Qian Zhang
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We prove that a positive answer to either of the two questions would completely resolve the optimal sample complexity up to log factors of the DS dimension. In this paper, we take a step forward towards improved sample complexity and error rate in multiclass learning. Specifically, for a concept class $\mathcal{H} \subseteq \mathcal{Y}^{\mathcal{X}}$ with $\dim(\mathcal{H}) = d$, we prove an $\Omega((d + \log(1/\delta))/\varepsilon)$ lower bound on its sample complexity and construct a multiclass learner whose worst-case error rate is $O((d^{3/2}\log(d)\log(n)\log(1/\delta))/n)$ with probability at least $1 - \delta$. (The two bounds are restated side by side after the table.) |
| Researcher Affiliation | Collaboration | Steve Hanneke (Purdue University, steve.hanneke@gmail.com); Shay Moran (Technion and Google Research, smoran@technion.ac.il); Qian Zhang (Purdue University, zhan3761@purdue.edu) |
| Pseudocode | Yes | Algorithm 1: Multiclass learner $\mathcal{A}_{\mathrm{red}}$ using a list learner $\mathcal{A}_{\mathrm{list}}$ (a hypothetical sketch of the general reduction pattern appears after the table) |
| Open Source Code | No | The paper is purely theoretical and does not mention releasing any source code or providing links to repositories. |
| Open Datasets | No | The paper is theoretical and does not conduct experiments with specific datasets. It discusses a 'distribution $P$ over $\mathcal{X} \times \mathcal{Y}$' and a 'concept class $\mathcal{H} \subseteq \mathcal{Y}^{\mathcal{X}}$' abstractly rather than concrete datasets. |
| Dataset Splits | No | The paper is theoretical and does not conduct experiments; therefore, there is no mention of training, validation, or test dataset splits. |
| Hardware Specification | No | The paper is purely theoretical and does not conduct any experiments, so no hardware specifications are mentioned. |
| Software Dependencies | No | The paper is purely theoretical and does not describe any experiments; therefore, no software dependencies with version numbers are mentioned. |
| Experiment Setup | No | The paper is purely theoretical and does not conduct any experiments, thus no experimental setup details such as hyperparameters or training settings are provided. |
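
For reference, the two bounds quoted in the Research Type row can be restated side by side in standard notation. The inversion from error rate to sample complexity below is our own illustration (with polylog factors suppressed), not an additional claim from the paper:

```latex
% Bounds quoted in the Research Type row, restated.
% Lower bound: learning a class H with \dim(H) = d to error eps with
% confidence 1 - delta requires
%   m(eps, delta) = Omega((d + log(1/delta)) / eps)  samples.
% Upper bound: the constructed learner achieves, with probability >= 1 - delta,
%   err_n = O(d^{3/2} log(d) log(n) log(1/delta) / n)  after n samples.
\[
  m_{\mathcal{H}}(\varepsilon,\delta)
    = \Omega\!\left(\frac{d + \log(1/\delta)}{\varepsilon}\right),
  \qquad
  \mathrm{err}_n
    = O\!\left(\frac{d^{3/2}\log(d)\log(n)\log(1/\delta)}{n}\right).
\]
% Setting err_n <= eps and solving for n, suppressing polylog factors,
% gives n = O~(d^{3/2}/eps), so the remaining gap between the upper and
% lower bounds is a multiplicative factor of O~(sqrt(d)).
```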
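
The Pseudocode row refers to the paper's Algorithm 1, which builds a multiclass learner $\mathcal{A}_{\mathrm{red}}$ from a list learner $\mathcal{A}_{\mathrm{list}}$. The paper releases no code, so the sketch below is only a minimal, hypothetical illustration of the general list-to-multiclass reduction pattern; the names `reduce_list_to_multiclass` and `toy_list_learner` and the index-selection rule are our assumptions, not the paper's actual construction.

```python
import random
from collections import Counter

def reduce_list_to_multiclass(list_learner, sample, seed=0):
    """Hypothetical list-to-multiclass reduction sketch (not the paper's
    Algorithm 1). Half the labeled sample trains the list learner, which
    maps each point x to a short list of candidate labels; the other half
    selects, by empirical error, a fixed list index to predict with.
    """
    sample = list(sample)
    rng = random.Random(seed)
    rng.shuffle(sample)
    half = len(sample) // 2
    train, select = sample[:half], sample[half:]

    # Stage 1: list predictor h_list(x) -> list of candidate labels.
    h_list = list_learner(train)

    # Stage 2: pick the list index whose entry is most often correct on
    # the held-out selection set (a simple stand-in for the paper's more
    # involved aggregation).
    def error_of_index(i):
        errs = 0
        for x, y in select:
            candidates = h_list(x)
            errs += candidates[min(i, len(candidates) - 1)] != y
        return errs / max(1, len(select))

    max_len = max(len(h_list(x)) for x, _ in select)
    best_i = min(range(max_len), key=error_of_index)

    def h(x):
        candidates = h_list(x)
        return candidates[min(best_i, len(candidates) - 1)]
    return h

if __name__ == "__main__":
    # Toy demo: a "list learner" that always proposes the two most
    # frequent training labels, most frequent first.
    def toy_list_learner(train):
        top2 = [lab for lab, _ in Counter(y for _, y in train).most_common(2)]
        return lambda x: top2 or [None]

    data = [(i, i % 2) for i in range(40)]  # labels alternate 0/1
    h = reduce_list_to_multiclass(toy_list_learner, data)
    print("prediction on x=7:", h(7))
```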