Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Quasi-Equivalence between Width and Depth of Neural Networks

Authors: Fenglei Fan, Rongjie Lai, Ge Wang

JMLR 2023 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | Based on a symmetric consideration, we investigate if the design of artificial neural networks should have a directional preference, and what the mechanism of interaction is between the width and depth of a network. Inspired by the De Morgan law, we address this fundamental question by establishing a quasi-equivalence between the width and depth of ReLU networks. We formulate two transforms for mapping an arbitrary ReLU network to a wide ReLU network and a deep ReLU network respectively, so that essentially the same capability of the original network can be implemented. Based on our findings, a deep network has a wide equivalent, and vice versa, subject to an arbitrarily small error. Our main contribution is the establishment of the width-depth quasi-equivalence of neural networks. We summarize our key results on ReLU networks in Table 1. (A toy numerical illustration of this equivalence is sketched after the table below.)
Researcher Affiliation | Academia | Feng-Lei Fan, Department of Biomedical Engineering, School of Engineering, Biomedical Imaging Center, Center for Biotechnology and Interdisciplinary Studies, Rensselaer Polytechnic Institute, Troy, NY 12180, USA; Rongjie Lai, Department of Mathematics, Rensselaer Polytechnic Institute, Troy, NY 12180, USA; Ge Wang, Department of Biomedical Engineering, School of Engineering, Biomedical Imaging Center, Center for Biotechnology and Interdisciplinary Studies, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
Pseudocode | No | The paper describes mathematical formulations, theorems, and proofs for network transformations and equivalences, but it does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain an explicit statement about releasing source code for the methodology described, nor does it provide a link to a code repository. It is a theoretical paper focusing on mathematical proofs.
Open Datasets | No | This paper is theoretical and focuses on mathematical proofs and network properties. It does not perform experiments on any datasets, and therefore no dataset access information is provided.
Dataset Splits | No | This paper is theoretical and does not involve empirical experiments with datasets; therefore, no dataset split information is provided.
Hardware Specification | No | This paper is theoretical and focuses on mathematical proofs of neural network properties. It does not describe any experimental implementations or hardware used.
Software Dependencies | No | This paper is theoretical and focuses on mathematical proofs and properties of neural networks. It does not describe any software implementation or dependencies with version numbers.
Experiment Setup | No | This paper is theoretical and focuses on mathematical proofs. It does not describe any experimental setup, hyperparameters, or training configurations.
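
To make the headline result concrete, below is a minimal NumPy sketch of the width-depth equivalence on a toy example. This is not the paper's construction: the weights are hand-derived for a single sawtooth function (a Telgarsky-style example), whereas the paper's transforms handle arbitrary ReLU networks. The function names deep_net and wide_net and all parameter values are illustrative assumptions.

```python
# A hand-built toy example (not from the paper): the same piecewise-linear
# sawtooth is realized both by a *deep* ReLU network (two composed tent-map
# layers) and by a *wide* one-hidden-layer ReLU network.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def deep_net(x):
    """Depth-2 ReLU network: the tent map composed with itself."""
    h = 2.0 * relu(x) - 4.0 * relu(x - 0.5)   # tent map on [0, 1]
    return 2.0 * relu(h) - 4.0 * relu(h - 0.5)

def wide_net(x):
    """One-hidden-layer ReLU network realizing the same sawtooth."""
    biases = np.array([0.0, 0.25, 0.5, 0.75])   # breakpoints of the sawtooth
    weights = np.array([4.0, -8.0, 8.0, -8.0])  # slope change at each breakpoint
    return relu(x[..., None] - biases) @ weights

xs = np.linspace(0.0, 1.0, 1001)
print(np.max(np.abs(deep_net(xs) - wide_net(xs))))  # ~0: the two networks agree
```

In this toy case the deep and wide realizations coincide exactly; the paper's general quasi-equivalence only guarantees agreement up to an arbitrarily small error.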