On Robust Multiclass Learnability
Authors: Jingyuan Xu, Weiwei Liu
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | This work analyzes the robust learning problem in the multiclass setting. Under the framework of Probably Approximately Correct (PAC) learning, we first show that the graph dimension and the Natarajan dimension, which characterize standard multiclass learnability, are no longer applicable in the robust learning problem. We then generalize these notions to the robust learning setting, denoting them the adversarial graph dimension (AG-dimension) and the adversarial Natarajan dimension (AN-dimension). Upper and lower bounds on the sample complexity of robust multiclass learning are rigorously derived based on the AG-dimension and AN-dimension, respectively. Moreover, we calculate the AG-dimension and AN-dimension of the class of linear multiclass predictors, and show that the graph (Natarajan) dimension is of the same order as the AG(AN)-dimension. Finally, we prove that the AG-dimension and AN-dimension are not equivalent. |
| Researcher Affiliation | Academia | Jingyuan Xu School of Computer Science Wuhan University jingyuanxu777@gmail.com Weiwei Liu School of Computer Science Wuhan University liuweiwei863@gmail.com |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. It focuses on theoretical definitions and proofs. |
| Open Source Code | No | The paper is theoretical and does not mention releasing any source code. The NeurIPS submission checklist explicitly marks 'Did you include the code, data, and instructions needed to reproduce the main experimental results' as N/A. |
| Open Datasets | No | The paper is theoretical and does not conduct empirical studies with datasets. Therefore, it does not specify a publicly available training dataset. The term 'training samples' appears in theoretical definitions but not in the context of an actual dataset used for experiments. |
| Dataset Splits | No | The paper is theoretical and does not conduct empirical studies with datasets. Therefore, it does not specify training, validation, or test splits. The NeurIPS submission checklist explicitly marks 'Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)' as N/A. |
| Hardware Specification | No | The paper is theoretical and does not describe any experiments that would require specific hardware. Therefore, no hardware specifications are mentioned. |
| Software Dependencies | No | The paper is theoretical and does not describe any computational experiments that would require specific software dependencies with version numbers. |
| Experiment Setup | No | The paper is theoretical and does not conduct any empirical experiments. Therefore, it does not provide details on an experimental setup, hyperparameters, or system-level training settings. |
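The combinatorial quantities the paper builds on can be made concrete for small finite classes. The following is a minimal brute-force sketch (our own illustration, not code from the paper; the names `natarajan_dim` and `n_shattered` are ours) of the standard Natarajan dimension, which the paper generalizes to the AN-dimension: a set S is N-shattered by a class H if there are two labelings f0, f1 that disagree on every point of S such that every subset of S is realized by some h agreeing with f0 on the subset and with f1 elsewhere.

```python
from itertools import combinations, product


def _subsets(indices):
    """Yield every subset of the given index collection."""
    idx = list(indices)
    for r in range(len(idx) + 1):
        yield from (set(c) for c in combinations(idx, r))


def n_shattered(points, hypotheses):
    """Check whether `points` is N-shattered by `hypotheses`.

    Hypotheses are dicts mapping domain points to labels. We search
    for witness labelings f0, f1 that differ on every point such that
    each subset T of `points` is realized by some h with h = f0 on T
    and h = f1 off T.
    """
    labels = sorted({h[x] for h in hypotheses for x in points})
    for f0 in product(labels, repeat=len(points)):
        for f1 in product(labels, repeat=len(points)):
            if any(a == b for a, b in zip(f0, f1)):
                continue  # f0 and f1 must disagree everywhere on S
            if all(
                any(
                    all(h[x] == (f0[i] if i in T else f1[i])
                        for i, x in enumerate(points))
                    for h in hypotheses
                )
                for T in _subsets(range(len(points)))
            ):
                return True
    return False


def natarajan_dim(domain, hypotheses):
    """Largest size of an N-shattered subset of `domain` (brute force)."""
    d = 0
    for r in range(1, len(domain) + 1):
        if any(n_shattered(list(S), hypotheses)
               for S in combinations(domain, r)):
            d = r
    return d
```

For example, the class of all functions from a two-point domain into three labels N-shatters both points, while the class of constant functions N-shatters only a single point. The same enumeration scheme, with an adversarial perturbation set folded into the realizability check, is the spirit in which the AN-dimension extends this definition.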