Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Deep Limits and a Cut-Off Phenomenon for Neural Networks

Authors: Benny Avelin, Anders Karlsson

JMLR 2022 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In the context of independent, identically distributed (i.i.d.) random layers, which corresponds to the random initialization of the weights in the network, we perform a few experiments where we observe a cut-off phenomenon. ... In the examples that we will simulate below, the limiting distribution is actually the point mass at 0... The result of the simulation can be found in Figure 2
Researcher Affiliation | Academia | Benny Avelin EMAIL Department of Mathematics, University of Uppsala, Box 256, 751 05 Uppsala, Sweden; Anders Karlsson EMAIL ... Université de Genève, Case Postale 64, 1211 Genève 4, Switzerland
Pseudocode | No | The paper contains mathematical formulations and theorems, but no explicitly labeled 'Pseudocode' or 'Algorithm' blocks, nor any structured code-like procedures.
Open Source Code | No | The paper does not provide any explicit statement about the release of source code, nor does it include links to any code repositories.
Open Datasets | No | In the examples that we will simulate below, the limiting distribution is actually the point mass at 0, and to make the total variation distance easier to define we work with finite precision, which makes the state space finite. ... This indicates a simulated environment rather than the use of an external, open dataset. The authors construct their own setup for simulation.
Dataset Splits | No | The paper describes a simulation-based study using randomly initialized neural networks and synthetic data generation, rather than relying on external datasets with predefined splits.
Hardware Specification | No | The paper describes simulation results in Section 7, but it does not provide any specific details about the hardware (e.g., GPU/CPU models, memory specifications) used to perform these simulations.
Software Dependencies | No | The paper does not specify any particular software, libraries, or their version numbers that were used for implementing the described models or conducting the simulations.
Experiment Setup | Yes | Consider the following simple Markov chain of neural network type (with heuristic initialization, see Glorot and Bengio (2010)): X_{i+1} = tanh(W_i X_i), i = 0, ..., where X_0 = x_0 ∈ R^N, the W_i are i.i.d. with W_i ~ unif([-1/√N, 1/√N]^{N×N}), and tanh is applied componentwise (as is customary). ... where we worked with a finite precision of 0.001 and measure the total variation distance to the point mass at 0.
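As a rough illustration of the quoted setup, the NumPy sketch below simulates the chain X_{i+1} = tanh(W_i X_i) with i.i.d. uniform weight matrices and, at each depth, estimates the total variation distance between the empirical (finite-precision) law of the state and the point mass at 0. On a finite state space, TV(μ, δ_0) = 1 − μ({0}). The dimension, depth, sample count, and starting state here are arbitrary illustrative choices, not the authors' settings; only the precision 0.001 and the weight distribution follow the quote.

```python
import numpy as np

def simulate_chain(N=30, depth=60, n_samples=400, precision=1e-3, seed=0):
    """Simulate X_{i+1} = tanh(W_i X_i), W_i i.i.d. ~ unif([-1/sqrt(N), 1/sqrt(N)]^{N x N}),
    and return the estimated TV distance to the point mass at 0 at each depth."""
    rng = np.random.default_rng(seed)
    bound = 1.0 / np.sqrt(N)
    # Illustrative nonzero starting state (the zero vector is a fixed point of the map).
    X = np.ones((n_samples, N))
    tv = []
    for _ in range(depth):
        # One fresh i.i.d. weight matrix per sample and per layer.
        W = rng.uniform(-bound, bound, size=(n_samples, N, N))
        X = np.tanh(np.einsum("bij,bj->bi", W, X))
        # Work at finite precision, as in the paper: round each coordinate.
        rounded = np.round(X / precision) * precision
        at_zero = np.all(rounded == 0.0, axis=1)
        # TV distance to the point mass at 0 on the finite state space.
        tv.append(1.0 - np.mean(at_zero))
    return tv

tv = simulate_chain()
```

Plotting `tv` against depth is the analogue of the paper's Figure 2: the curve stays near 1 for many layers and then drops toward 0, which is the cut-off behavior the authors report.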