Neural Networks with Small Weights and Depth-Separation Barriers

Authors: Gal Vardi, Ohad Shamir

Venue: NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | "In this paper, we focus on feedforward ReLU networks, and prove fundamental barriers to proving such [depth-separation] results beyond depth 4, by reduction to open problems and natural-proof barriers in circuit complexity. Our paper is structured as follows: In Section 2 we provide notations and definitions, followed by our results in Section 3. We sketch our proof ideas in Section 4, with all proofs deferred to Appendix A." (A minimal sketch of a feedforward ReLU network appears after this table.)
Researcher Affiliation | Academia | Gal Vardi, Weizmann Institute of Science, gal.vardi@weizmann.ac.il; Ohad Shamir, Weizmann Institute of Science, ohad.shamir@weizmann.ac.il
Pseudocode | No | The paper presents theoretical proofs and does not include any pseudocode or algorithm blocks.
Open Source Code | No | This is a theoretical paper and does not mention releasing open-source code for its methodology.
Open Datasets | No | This is a theoretical paper and does not mention using or providing access to any datasets for training or evaluation.
Dataset Splits | No | This is a theoretical paper and does not involve empirical validation, so no dataset splits are reported.
Hardware Specification | No | This is a theoretical paper and does not mention any hardware used for experiments.
Software Dependencies | No | This is a theoretical paper and does not mention any software dependencies or versions for an experimental setup.
Experiment Setup | No | This is a theoretical paper focused on mathematical proofs; it does not describe any experimental setup or hyperparameters.
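For context on the Research Type row, which quotes the paper's focus on feedforward ReLU networks: the sketch below is a minimal illustration of that object under the standard definition (alternating affine maps and the ReLU activation max(z, 0)). The depth, layer widths, and random weights are arbitrary choices for illustration; they are not taken from the paper, which (as the table records) releases no code.

```python
import numpy as np

def relu(z):
    # ReLU activation: max(z, 0), applied elementwise.
    return np.maximum(z, 0.0)

def relu_network(x, weights, biases):
    """Evaluate a feedforward ReLU network on input x.

    `weights` and `biases` hold one affine map per layer; ReLU is
    applied after every layer except the last, so a network with k
    weight matrices has depth k under the usual convention.
    """
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(W @ h + b)
    W_out, b_out = weights[-1], biases[-1]
    return W_out @ h + b_out

# Illustrative depth-3 network on 2-dimensional inputs
# (hypothetical widths and random weights, not from the paper).
rng = np.random.default_rng(0)
dims = [2, 4, 4, 1]
weights = [rng.standard_normal((dims[i + 1], dims[i])) for i in range(3)]
biases = [rng.standard_normal(dims[i + 1]) for i in range(3)]
print(relu_network(np.array([1.0, -0.5]), weights, biases))
```

The depth parameter is what the paper's results concern: its barriers apply to proving depth-separation results for such networks beyond depth 4.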