Bounds on the Approximation Power of Feedforward Neural Networks

Authors: Mohammad Mehrabi, Aslan Tchamkerten, Mansoor Yousefi

ICML 2018

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | In this paper we consider general feedforward neural networks with piecewise linear activation functions and establish bounds on the size of the network in terms of the approximation error, the depth d, the width, and the dimension of the input space needed to approximate a given function. First, lower bounds on the size of a network are established in terms of the approximation error and the network depth and width. These bounds improve upon state-of-the-art bounds for certain classes of functions, such as strongly convex functions. Second, an upper bound is established on the difference of two neural networks with identical weights but different activation functions (illustrated in the sketch after the table).
Researcher Affiliation | Academia | (1) Department of Electrical Engineering, Sharif University of Technology, Iran; (2) Department of Communications and Electronics, Télécom ParisTech, France.
Pseudocode | No | No pseudocode or algorithm blocks were found in the paper.
Open Source Code | No | The paper does not mention providing any open-source code for the described theoretical work.
Open Datasets | No | This paper is theoretical and does not involve training models on specific datasets.
Dataset Splits | No | The paper is theoretical and does not describe data splitting for training, validation, or testing.
Hardware Specification | No | The paper is theoretical and does not mention any hardware specifications used for experiments.
Software Dependencies | No | The paper is theoretical and does not mention any software dependencies with version numbers.
Experiment Setup | No | The paper is theoretical and does not describe an experimental setup, including hyperparameters or training settings.
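
Since the paper ships no code, the second result summarized above can still be probed numerically. Below is a minimal sketch, assuming ReLU and leaky ReLU (slope 0.1) as the two piecewise linear activations and random Gaussian weights; the network shape, the input grid, and all names are illustrative choices, not taken from the paper. It builds one set of weights, evaluates the same network under both activations, and reports the empirical sup-norm gap that the paper's upper bound controls.

```python
import numpy as np

rng = np.random.default_rng(0)

# One set of random weights shared by both networks:
# scalar input, three hidden layers of width 16, scalar output.
widths = [1, 16, 16, 16, 1]
weights = [rng.standard_normal((m, n)) / np.sqrt(n)
           for n, m in zip(widths[:-1], widths[1:])]
biases = [rng.standard_normal(m) for m in widths[1:]]

def forward(x, activation):
    """Evaluate the feedforward network on inputs x of shape (batch, 1)."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = activation(h @ W.T + b)
    return h @ weights[-1].T + biases[-1]  # final layer is affine

relu = lambda z: np.maximum(z, 0.0)
leaky = lambda z: np.where(z > 0, z, 0.1 * z)  # second piecewise linear activation

# Empirical sup-norm difference over a grid: the quantity bounded by the
# paper's second result for identical weights and different activations.
x = np.linspace(-1.0, 1.0, 2001).reshape(-1, 1)
gap = np.max(np.abs(forward(x, relu) - forward(x, leaky)))
print(f"empirical sup-norm gap on [-1, 1]: {gap:.4f}")
```

Varying the depth and width in `widths`, or the slope of the second activation, shows how this empirical gap moves with the quantities that appear in such a bound.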