The power of deeper networks for expressing natural functions

Authors: David Rolnick, Max Tegmark

ICLR 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our results apply to standard feedforward neural networks and are borne out by empirical tests. We empirically tested Conjecture 5.2 by training ANNs to predict the product of input values x_1, ..., x_n with n = 20.
Researcher Affiliation | Academia | David Rolnick, Max Tegmark; Massachusetts Institute of Technology; {drolnick, tegmark}@mit.edu
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks; methods are described in prose.
Open Source Code | No | The paper does not provide any statement about releasing source code or a link to a code repository for the methodology described.
Open Datasets | No | The paper states: "Input variables x_i were drawn uniformly at random from the interval [0, 2]", indicating a synthetically generated dataset rather than a publicly available one with concrete access information.
Dataset Splits | No | The paper mentions training ANNs and empirical tests but does not specify any dataset splits (e.g., train/validation/test percentages or sample counts).
Hardware Specification | No | The paper does not specify any hardware details (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper mentions the "Adadelta optimizer (Zeiler, 2012)" and activation functions such as "tanh(x)" and "rectified linear units (ReLUs)", but does not provide version numbers for any software libraries or frameworks used.
Experiment Setup | Yes | The networks were trained using the Adadelta optimizer (Zeiler, 2012) to minimize the absolute value of the difference between the predicted and actual values. Input variables x_i were drawn uniformly at random from the interval [0, 2], so that the expected value of the output would be of manageable size. (A code sketch of this setup appears below.)
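To make the reported setup concrete, here is a minimal PyTorch sketch of the experiment the table describes: a standard feedforward network trained with the Adadelta optimizer to predict the product x_1 * ... * x_n of n = 20 inputs drawn uniformly from [0, 2], minimizing the absolute error between predicted and actual values. This is not the authors' code; the layer width, depth, sample count, batch size, epoch count, and the choice of tanh activations are illustrative assumptions rather than values taken from the paper.

```python
# Minimal sketch (not the authors' code) of the product-prediction experiment:
# train a feedforward network with Adadelta to predict x_1 * ... * x_n for
# n = 20 inputs drawn uniformly from [0, 2], using mean absolute error.
# Width, depth, sample count, batch size, and epochs are assumed values.

import torch
import torch.nn as nn

torch.manual_seed(0)

n = 20              # number of input variables, as in the paper's test of Conjecture 5.2
hidden = 128        # assumed hidden-layer width
depth = 3           # assumed number of hidden layers
num_samples = 10000
batch_size = 256
epochs = 50

# Synthetic data: inputs uniform on [0, 2]; target is the product of the inputs.
X = 2.0 * torch.rand(num_samples, n)
y = X.prod(dim=1, keepdim=True)

# Standard feedforward network with tanh activations (the paper also considers ReLUs).
layers = []
in_dim = n
for _ in range(depth):
    layers += [nn.Linear(in_dim, hidden), nn.Tanh()]
    in_dim = hidden
layers.append(nn.Linear(in_dim, 1))
model = nn.Sequential(*layers)

optimizer = torch.optim.Adadelta(model.parameters())
loss_fn = nn.L1Loss()   # absolute difference between predicted and actual product

for epoch in range(epochs):
    perm = torch.randperm(num_samples)
    for i in range(0, num_samples, batch_size):
        idx = perm[i:i + batch_size]
        optimizer.zero_grad()
        loss = loss_fn(model(X[idx]), y[idx])
        loss.backward()
        optimizer.step()
```

Because the paper reports no dataset splits, hardware, or library versions, any such choices (including those above) would have to be supplied by a reproducer.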