Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Random Fully Connected Neural Networks as Perturbatively Solvable Hierarchies
Authors: Boris Hanin
JMLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We study the distribution of fully connected neural networks with Gaussian random weights/biases and L hidden layers, each of width proportional to a large parameter n. For polynomially bounded non-linearities we give sharp estimates in powers of 1/n for the joint cumulants of the network output and its derivatives. We further show that network cumulants form a perturbatively solvable hierarchy in powers of 1/n. |
| Researcher Affiliation | Academia | Boris Hanin, EMAIL, Department of Operations Research and Financial Engineering, Princeton University, Princeton, NJ 08544, USA |
| Pseudocode | No | The paper contains mathematical derivations, theorems, and proofs, but no structured pseudocode or algorithm blocks are present. |
| Open Source Code | No | The paper makes no statement about releasing source code for its methodology and provides no links to code repositories. The only link given (http://jmlr.org/papers/v25/23-0643.html) concerns the attribution requirements of the paper's license, not source code. |
| Open Datasets | No | This paper is theoretical and does not conduct experiments on datasets. While it mentions applications of neural networks in general (e.g., 'self-driving cars (Krizhevsky et al. (2012))'), it does not specify or provide access information for any datasets used in its own research. |
| Dataset Splits | No | This paper is theoretical and does not involve experiments on datasets; therefore, no dataset-split information is provided. |
| Hardware Specification | No | The paper is theoretical and does not describe any experimental procedures or hardware used for computations. |
| Software Dependencies | No | The paper is theoretical and focuses on mathematical derivations; it does not specify any software dependencies or version numbers. |
| Experiment Setup | No | The paper focuses on theoretical analysis and mathematical proofs of neural network properties. It does not describe an experimental setup, hyperparameters, or training configurations. |
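Although the paper itself contains no experiments, its central claim, that the cumulants of a wide random network's output form a hierarchy in powers of 1/n, is easy to probe numerically. The sketch below is not from the paper: the tanh non-linearity, the 1/fan-in weight variance, the fixed input, and the sample sizes are all illustrative choices. It samples many independent random fully connected networks at several widths and estimates the fourth cumulant of the scalar output, which should shrink toward zero (the Gaussian value) as the width n grows.

```python
import numpy as np


def random_mlp_outputs(n, depth, n_samples, rng):
    """Sample scalar outputs of independent random MLPs of width n.

    Weights are i.i.d. Gaussian with variance 1/fan_in (a common
    critical scaling); biases are omitted for simplicity. The network
    is evaluated at a fixed input x0 = (1, ..., 1).
    """
    x0 = np.ones(n)
    outs = np.empty(n_samples)
    for s in range(n_samples):
        h = x0
        for _ in range(depth):
            fan_in = h.shape[0]
            W = rng.normal(0.0, 1.0 / np.sqrt(fan_in), size=(n, fan_in))
            h = np.tanh(W @ h)
        w_out = rng.normal(0.0, 1.0 / np.sqrt(n), size=n)
        outs[s] = w_out @ h
    return outs


def fourth_cumulant(z):
    """kappa_4 = E[(z - mu)^4] - 3 Var(z)^2; identically 0 for a Gaussian."""
    z = np.asarray(z, dtype=float)
    z = z - z.mean()
    return np.mean(z**4) - 3.0 * np.mean(z**2) ** 2


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    for n in (16, 64, 256):
        k4 = fourth_cumulant(random_mlp_outputs(n, depth=3, n_samples=4000, rng=rng))
        print(f"width n = {n:4d}   estimated kappa_4 = {k4:+.5f}")
```

The estimates are Monte Carlo quantities and therefore noisy, especially at large width where the true cumulant is itself O(1/n); increasing `n_samples` tightens them. This is an illustration of the regime the paper analyzes, not a reproduction of its (purely analytic) results.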