Deep, Skinny Neural Networks are not Universal Approximators

Authors: Jesse Johnson

ICLR 2019

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | The proof of Theorem 1 consists of two steps... In the first step, described in Section 5, we examine the family of functions defined by deep, skinny neural networks... We present experimental results that demonstrate the constraints in Section 7... To demonstrate the effect of Theorem 1, we used the TensorFlow Neural Network Playground (1) to train two different networks on a standard synthetic dataset with one class centered at the origin of the two-dimensional plane, and the other class forming a ring around it. |
| Researcher Affiliation | Academia | Jesse Johnson, Sanofi, jejo.math@gmail.com. Only a personal email address is provided, so a clear institutional affiliation cannot be determined. |
| Pseudocode | No | No structured pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | No | The paper mentions using the TensorFlow Neural Network Playground, but it does not release source code or link to a code repository for its experiments. |
| Open Datasets | No | The paper mentions using a standard synthetic dataset from the TensorFlow Neural Network Playground, but it does not provide access information for a publicly available dataset. |
| Dataset Splits | No | The paper does not provide specific dataset split information (exact percentages, sample counts, citations to predefined splits, or detailed splitting methodology) for training, validation, or testing. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions using the TensorFlow Neural Network Playground, but it does not specify version numbers or other software dependencies needed to reproduce the experiments. |
| Experiment Setup | Yes | The experimental setup describes the neural network architectures used: 'The first network has six two-dimensional hidden layers...' and 'The second network has a single hidden layer of dimension three...' |
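
To make the Experiment Setup row concrete, the sketch below shows one way the described comparison could be reproduced in Python: two networks trained to separate a class at the origin from a class forming a ring around it, one with six two-dimensional hidden layers and one with a single hidden layer of dimension three. The paper ran this experiment interactively in the TensorFlow Neural Network Playground rather than in released code, so the dataset parameters, tanh activation, optimizer, and training schedule here are assumptions, not values reported in the paper.

```python
# Minimal sketch (not the authors' code): the paper used the TensorFlow
# Neural Network Playground, so the dataset radii, noise level, sample
# count, tanh activation, optimizer, and epoch count below are assumed
# stand-ins rather than reported values.
import numpy as np
import tensorflow as tf


def make_ring_dataset(n=500, ring_radius=5.0, noise=0.5, seed=0):
    """One class clustered at the origin, the other forming a ring around it."""
    rng = np.random.default_rng(seed)
    n_inner = n // 2
    n_outer = n - n_inner
    inner = rng.normal(scale=1.0, size=(n_inner, 2))
    angles = rng.uniform(0.0, 2.0 * np.pi, size=n_outer)
    outer = ring_radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    outer += rng.normal(scale=noise, size=outer.shape)
    X = np.concatenate([inner, outer]).astype("float32")
    y = np.concatenate([np.zeros(n_inner), np.ones(n_outer)]).astype("float32")
    return X, y


def deep_skinny_net():
    """First network described in the paper: six two-dimensional hidden layers."""
    layers = [tf.keras.Input(shape=(2,))]
    layers += [tf.keras.layers.Dense(2, activation="tanh") for _ in range(6)]
    layers += [tf.keras.layers.Dense(1, activation="sigmoid")]
    return tf.keras.Sequential(layers)


def shallow_wider_net():
    """Second network described in the paper: a single hidden layer of dimension three."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=(2,)),
        tf.keras.layers.Dense(3, activation="tanh"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])


if __name__ == "__main__":
    X, y = make_ring_dataset()
    for name, build in [("deep_skinny", deep_skinny_net), ("shallow_wider", shallow_wider_net)]:
        model = build()
        model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
        model.fit(X, y, epochs=300, batch_size=32, verbose=0)
        _, acc = model.evaluate(X, y, verbose=0)
        print(f"{name}: training accuracy = {acc:.3f}")
```

The intended contrast, per the paper's Theorem 1, is that the deep, width-two network cannot carve out a bounded decision region around the central class, while the network with a single three-unit hidden layer can.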