Norm-Based Generalisation Bounds for Deep Multi-Class Convolutional Neural Networks

Authors: Antoine Ledent, Waleed Mustafa, Yunwen Lei, Marius Kloft (pp. 8279-8287)

AAAI 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We show generalisation error bounds for deep learning with two main improvements over the state of the art. (1) Our bounds have no explicit dependence on the number of classes except for logarithmic factors. (2) We adapt the classic Rademacher analysis of DNNs to incorporate weight sharing, a task of fundamental theoretical importance which was previously attempted only under very restrictive assumptions. (An illustrative form of such a bound is sketched after this table.)
Researcher Affiliation | Academia | Antoine Ledent (1), Waleed Mustafa (1), Yunwen Lei (1, 2) and Marius Kloft (1). (1) Department of Computer Science, TU Kaiserslautern, 67653 Kaiserslautern, Germany. (2) School of Computer Science, University of Birmingham, Birmingham B15 2TT, United Kingdom.
Pseudocode | No | The paper presents mathematical theorems and propositions but does not include any pseudocode or algorithm blocks.
Open Source Code | No | The paper does not mention making any source code available for the described methodology.
Open Datasets | No | The paper is purely theoretical and does not describe or use any specific datasets for training or evaluation. It refers to "training examples" in a general theoretical context, but not to a concrete dataset.
Dataset Splits | No | The paper is purely theoretical and does not describe any experimental setup or dataset splits (training, validation, test) for reproducibility.
Hardware Specification | No | The paper is purely theoretical and does not describe any experimental setup, thus no hardware specifications are mentioned.
Software Dependencies | No | The paper is purely theoretical and does not describe any experimental setup, thus no software dependencies with version numbers are mentioned.
Experiment Setup | No | The paper is purely theoretical and does not describe any experimental setup, hyperparameters, or training settings.
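
To make the abstract's first claim concrete, the LaTeX sketch below spells out the generic shape of a norm-based multi-class margin bound. It is an illustration only, not the paper's exact theorem: the capacity term \(\mathcal{R}_{\mathcal{F}}\), the margin-loss notation, and the confidence term are placeholder choices standard in this literature, assumed here for exposition.

```latex
% Illustrative shape of a norm-based multi-class margin bound.
% The capacity term \mathcal{R}_{\mathcal{F}} and the exact constants are
% placeholders for exposition; the paper's own theorems are stated differently.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
With probability at least $1-\delta$ over a sample of $n$ i.i.d.\ examples
$(x_i, y_i)$, every network $F \in \mathcal{F}$ and margin $\gamma > 0$ satisfy
\[
  \Pr\bigl[\arg\max_{y} F(x)_y \neq y\bigr]
  \;\le\;
  \widehat{\mathcal{L}}_{\gamma}(F)
  \;+\;
  \widetilde{O}\!\left(\frac{\mathcal{R}_{\mathcal{F}}}{\gamma\sqrt{n}}\right)
  \;+\;
  O\!\left(\sqrt{\frac{\log(1/\delta)}{n}}\right),
\]
where $\widehat{\mathcal{L}}_{\gamma}(F)$ is the empirical margin loss and
$\mathcal{R}_{\mathcal{F}}$ is a capacity term built from products and sums of
layer norms. In bounds of the kind summarised above, the number of classes
enters $\mathcal{R}_{\mathcal{F}}$ only through logarithmic factors, and the
norms of convolutional layers are taken over the shared filter weights rather
than over the much larger induced linear maps.
\end{document}
```

This sketch follows the structure of classical Rademacher-complexity margin bounds; per the abstract, the paper's contribution lies in how the capacity term is constructed for convolutional architectures with weight sharing and in removing explicit class-count dependence.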