An Information-Theoretic Framework for Deep Learning
Authors: Hong Jun Jeon, Benjamin Van Roy
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | In this paper, we propose a novel information-theoretic framework with its own notions of regret and sample complexity for analyzing the data requirements of machine learning. We use this framework to study the sample complexity of learning from data generated by deep ReLU neural networks and deep networks that are infinitely wide but have a bounded sum of weights. We establish that the sample complexity of learning under these data generating processes is at most linear and quadratic, respectively, in network depth. ... The main contribution of this paper is an elegant and intuitive information-theoretic framework for analyzing machine learning problems. We demonstrate its promise by deriving the two aforementioned theoretical results pertaining to deep neural networks which were not possible to show with existing analysis methods. (A hedged symbolic restatement of these depth scalings appears after the table.) |
| Researcher Affiliation | Academia | Hong Jun Jeon, Department of Computer Science, Stanford University, Stanford, CA 94305, hjjeon@stanford.edu; Benjamin Van Roy, Department of Electrical Engineering and Department of Management Science and Engineering, Stanford University, Stanford, CA 94305, bvr@stanford.edu |
| Pseudocode | No | The paper is theoretical and focuses on mathematical derivations and proofs; it does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide access to source code. The checklist states '[N/A]' for code and datasets. |
| Open Datasets | No | The paper is theoretical and does not conduct experiments on datasets, so it provides no information about publicly available training data. The checklist indicates '[N/A]' for data. |
| Dataset Splits | No | The paper is theoretical and involves no empirical experiments, so it provides no dataset split information (train/validation/test). |
| Hardware Specification | No | The paper is purely theoretical and does not describe any experiments that would require hardware. The checklist indicates '[N/A]' for compute resources. |
| Software Dependencies | No | The paper is theoretical and does not describe any experiments that would require specific software dependencies with version numbers. |
| Experiment Setup | No | The paper is theoretical and does not detail any experimental setup, hyperparameters, or training configurations. |
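The depth scalings quoted in the Research Type row can be restated compactly. The following is a minimal sketch in our own notation, assuming only what the abstract states: the symbols N (sample complexity) and d (network depth) are illustrative choices, not the paper's, and the hidden constants are unspecified here.

```latex
% Hedged restatement of the abstract's depth-scaling claims
% (requires amsmath for \text). N(d) denotes sample complexity
% as a function of network depth d; the O(.) constants hide all
% dependence on quantities other than depth (width, weight
% bounds, target accuracy), which the abstract does not give.
\[
  N_{\mathrm{ReLU}}(d) = O(d)
  \qquad \text{(deep ReLU networks)},
\]
\[
  N_{\mathrm{wide}}(d) = O(d^{2})
  \qquad \text{(infinitely wide networks with a bounded sum of weights)}.
\]
```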