Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
On the infinite-depth limit of finite-width neural networks
Authors: Soufiane Hayou
TMLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical evaluations of these theoretical findings are also provided. Empirical verification of Proposition 2, panels (a), (b), (d): histograms of $\log(\|Y_L\|)$ based on $N = 5000$ simulations for depths $L \in \{5, 50, 100\}$ with $\|Y_0\| = 1$. The estimated density (Gaussian kernel estimate) and the theoretical density (Gaussian) are shown on the same graphs. |
| Researcher Affiliation | Academia | Soufiane Hayou, EMAIL, Department of Mathematics, National University of Singapore |
| Pseudocode | No | The paper primarily focuses on theoretical mathematical analysis and proofs, and does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statements about the release of source code, nor does it include links to code repositories or mention code in supplementary materials. |
| Open Datasets | No | The paper describes theoretical analysis and empirical simulations of neural network behavior. It does not mention the use of or provide access information for any publicly available datasets for its experiments. |
| Dataset Splits | No | The paper focuses on theoretical models and simulations rather than using external datasets, and therefore, does not provide any information regarding dataset splits. |
| Hardware Specification | No | The paper describes theoretical analysis and numerical simulations, but it does not specify any hardware details (e.g., GPU/CPU models, memory amounts) used for running these simulations. |
| Software Dependencies | No | The paper discusses mathematical models and simulations of neural networks but does not provide any specific software dependencies or version numbers (e.g., programming languages, libraries, or frameworks) used for its implementation. |
| Experiment Setup | Yes | $Y_l = Y_{l-1} + \frac{1}{\sqrt{L}} W_l \, \phi(Y_{l-1}), \quad l = 1, \dots, L, \quad (1)$ where $\phi : \mathbb{R} \to \mathbb{R}$ is the activation function, $L \geq 1$ is the network depth, $W_{in} \in \mathbb{R}^{n \times d}$, and $W_l \in \mathbb{R}^{n \times n}$ is the weight matrix in the $l$th layer. We assume that the weights are randomly initialized with iid Gaussian variables $W_l^{ij} \sim \mathcal{N}(0, \frac{1}{n})$, $W_{in}^{ij} \sim \mathcal{N}(0, \frac{1}{d})$. For the sake of simplification, we only consider networks with no bias, and we omit the dependence of $Y_l$ on $n$ in the notation. |
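The residual recursion and simulation protocol described in the table rows above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' code: the `tanh` activation, the choice of starting vector with unit norm, and the reduced sample count (the paper uses N = 5000 simulations) are assumptions made here for brevity.

```python
import numpy as np

def final_log_norm(n=50, L=100, rng=None):
    """Run Y_l = Y_{l-1} + W_l * phi(Y_{l-1}) / sqrt(L) and return log ||Y_L||.

    Weights W_l are iid N(0, 1/n); phi = tanh is an illustrative choice.
    """
    rng = np.random.default_rng() if rng is None else rng
    phi = np.tanh
    Y = np.zeros(n)
    Y[0] = 1.0  # unit-norm starting vector, so ||Y_0|| = 1
    for _ in range(L):
        W = rng.normal(0.0, np.sqrt(1.0 / n), size=(n, n))
        Y = Y + W @ phi(Y) / np.sqrt(L)
    return np.log(np.linalg.norm(Y))

# Collect samples of log ||Y_L|| to histogram, as in the paper's
# empirical verification (sample count reduced here for speed).
samples = np.array([final_log_norm(n=50, L=50) for _ in range(200)])
```

Histogramming `samples` and overlaying a Gaussian density approximates the comparison between the kernel density estimate and the theoretical Gaussian limit described in the Research Type row.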