The phase diagram of approximation rates for deep neural networks
Authors: Dmitry Yarotsky, Anton Zhevnerchuk
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | "We explore the phase diagram of approximation rates for deep neural networks and prove several new theoretical results." "In the present paper we perform a systematic theoretical study of this question in the context of network expressiveness." |
| Researcher Affiliation | Academia | Dmitry Yarotsky Skolkovo Institute of Science and Technology d.yarotsky@skoltech.ru Anton Zhevnerchuk Skolkovo Institute of Science and Technology Anton.Zhevnerchuk@skoltech.ru |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code for the described methodology. |
| Open Datasets | No | This is a theoretical paper; it does not use any datasets for training or provide access information for a publicly available dataset. |
| Dataset Splits | No | This is a theoretical paper and does not provide any specific dataset split information (e.g., train/validation/test percentages or counts) needed to reproduce data partitioning. |
| Hardware Specification | No | This is a theoretical paper and reports no experiments, so no hardware details are provided. |
| Software Dependencies | No | This is a theoretical paper and does not provide any specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate experiments. |
| Experiment Setup | No | This is a theoretical paper and does not contain any specific experimental setup details (e.g., hyperparameters, training configurations, or system-level settings). |