On Neuronal Capacity

Authors: Pierre Baldi, Roman Vershynin

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We define the capacity of a learning machine to be the logarithm of the number (or volume) of the functions it can implement. We review known results, and derive new results, estimating the capacity of several neuronal models: linear and polynomial threshold gates, linear and polynomial threshold gates with constrained weights (binary weights, positive weights), and ReLU neurons. We also derive some capacity estimates and bounds for fully recurrent networks, as well as feedforward networks.
Researcher Affiliation | Academia | Pierre Baldi, Department of Computer Science, University of California, Irvine, Irvine, CA 92697, pfbaldi@uci.edu; Roman Vershynin, Department of Mathematics, University of California, Irvine, Irvine, CA 92697, rvershyn@uci.edu
Pseudocode | No | No pseudocode or algorithm blocks were found in the paper.
Open Source Code | No | The paper does not provide any statement about releasing source code or a link to a code repository.
Open Datasets | No | The paper is theoretical and conducts no experiments with datasets, so no information on dataset availability is provided.
Dataset Splits | No | With no experiments, there are no training/validation/test splits to report.
Hardware Specification | No | No experiments are run, so no hardware details are mentioned.
Software Dependencies | No | No experiments are run, so no software dependencies or version numbers are listed.
Experiment Setup | No | No experiments are run, so no setup details such as hyperparameters or training configurations are provided.
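
The capacity definition quoted in the Research Type row is easy to make concrete for tiny cases. The sketch below is my own illustration, not code from the paper: it brute-forces the distinct linear threshold functions on {0,1}^n for n ≤ 3 and prints the base-2 logarithm of the count. The integer weight bound W = 3 is an assumption that happens to suffice at these sizes, where the exact counts are the classical 4, 14, and 104.

```python
# Minimal sketch (not from the paper): make the quoted capacity
# definition concrete by brute-forcing the distinct linear threshold
# functions f(x) = 1[<w, x> >= t] on {0,1}^n for small n.
import math
from itertools import product

def threshold_functions(n, W=3):
    """Set of truth tables realizable by a threshold gate on {0,1}^n
    with integer weights in [-W, W]; W = 3 suffices for n <= 3."""
    points = list(product((0, 1), repeat=n))
    tables = set()
    for w in product(range(-W, W + 1), repeat=n):
        dots = [sum(wi * xi for wi, xi in zip(w, x)) for x in points]
        # Dot products are integers, so integer thresholds from
        # min(dots) to max(dots) + 1 realize every possible cut.
        for t in range(min(dots), max(dots) + 2):
            tables.add(tuple(int(d >= t) for d in dots))
    return tables

for n in (1, 2, 3):
    count = len(threshold_functions(n))  # classical counts: 4, 14, 104
    print(f"n={n}: {count} functions, capacity = {math.log2(count):.2f} bits")
```

For comparison, the result the paper reviews for a single linear threshold gate (Zuev's theorem) puts the capacity at n²(1 + o(1)) bits, and the small-n counts printed above match the classical sequence 2, 4, 14, 104, 1882, ... of threshold function counts.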