Characterization of Overfitting in Robust Multiclass Classification
Authors: Jingyuan Xu, Weiwei Liu
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | In this paper, we derive both upper and lower bounds of h_U(k, n, m), and demonstrate that our upper bounds and lower bounds are matching within logarithmic factors when n and the distribution of test dataset features D_X are fixed. Next, we give a brief overview of proof techniques used to obtain the main results. We first note that throughout this paper we use the notion of corrupted hypothesis [28], which transforms the formulation of robust accuracy to a non-robust one, thus greatly simplifying the proofs. The definition of corrupted hypothesis is presented in the beginning of Section 3. We establish the upper bounds via a minimum description length argument, following closely a proof of an analogous result by [5] for the non-robust setting. ... To obtain the lower bounds, we propose computationally efficient algorithms for two regions of k respectively. The algorithms are modified from [6]... |
| Researcher Affiliation | Academia | Jingyuan Xu, Weiwei Liu; School of Computer Science, Wuhan University; National Engineering Research Center for Multimedia Software, Wuhan University; Institute of Artificial Intelligence, Wuhan University; Hubei Key Laboratory of Multimedia and Network Communication Engineering, Wuhan University; {jingyuanxu777,liuweiwei863}@gmail.com |
| Pseudocode | Yes | Algorithm 1: A_small (k = 1); Algorithm 2: A_small (k > 1); Algorithm 3: A_big(C) |
| Open Source Code | No | The paper is a theoretical work focusing on deriving mathematical bounds and does not provide any statements regarding the release of open-source code for the described methodologies. |
| Open Datasets | No | The paper is theoretical and focuses on mathematical derivations. It does not mention or utilize specific datasets, public or otherwise, for training or evaluation. |
| Dataset Splits | No | The paper is a theoretical work and does not describe any experimental setup involving training, validation, or test dataset splits. |
| Hardware Specification | No | The paper is theoretical and does not describe any experiments or computations that would require specific hardware specifications. |
| Software Dependencies | No | The paper is purely theoretical and does not mention any software dependencies or their specific version numbers. |
| Experiment Setup | No | The paper is theoretical and focuses on mathematical derivations; it does not include details on experimental setup, hyperparameters, or system-level training settings. |
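
The corrupted-hypothesis device quoted under Research Type reduces robust accuracy to the standard accuracy of a derived classifier. The sketch below is only a rough illustration of that reduction, not the paper's construction: the classifier `h`, the perturbation map `U`, and the toy data are all hypothetical, and `U(x)` is assumed to be a small finite set so the worst case can be enumerated.

```python
# Hypothetical sketch of the corrupted-hypothesis reduction: robust accuracy of h
# under a perturbation set U(x) equals the standard accuracy of a derived
# hypothesis that predicts h's common label on U(x) when it is constant there,
# and a dummy label otherwise. Names and data are illustrative only.

from typing import Callable, Iterable, Sequence, Tuple


def robust_accuracy(
    h: Callable[[float], int],
    U: Callable[[float], Iterable[float]],
    data: Sequence[Tuple[float, int]],
) -> float:
    """Fraction of points whose entire perturbation set U(x) is classified as y."""
    correct = sum(
        all(h(x_prime) == y for x_prime in U(x)) for x, y in data
    )
    return correct / len(data)


def corrupted_accuracy(
    h: Callable[[float], int],
    U: Callable[[float], Iterable[float]],
    data: Sequence[Tuple[float, int]],
) -> float:
    """Standard (non-robust) accuracy of the corrupted hypothesis."""

    def h_corrupted(x: float) -> int:
        preds = {h(x_prime) for x_prime in U(x)}
        # Predict the common label if h is constant on U(x); else a dummy label
        # -1, which never matches a true class and so counts as an error.
        return preds.pop() if len(preds) == 1 else -1

    return sum(h_corrupted(x) == y for x, y in data) / len(data)


if __name__ == "__main__":
    # Toy 1-D, 3-class example with an interval-style perturbation set.
    h = lambda x: 0 if x < 0.0 else (1 if x < 1.0 else 2)
    U = lambda x: (x - 0.1, x, x + 0.1)
    data = [(-0.5, 0), (0.05, 1), (0.5, 1), (1.5, 2)]
    # The two quantities coincide by construction of the corrupted hypothesis.
    print(robust_accuracy(h, U, data), corrupted_accuracy(h, U, data))
```

Because the corrupted hypothesis depends only on x, the second quantity is an ordinary (non-robust) accuracy, which is the simplification the quoted passage attributes to this notion.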