Provable Guarantees for Neural Networks via Gradient Feature Learning
Authors: Zhenmei Shi, Junyi Wei, Yingyu Liang
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | Our paper is purely theoretical in nature, and thus we do not anticipate an immediate negative ethical impact. |
| Researcher Affiliation | Academia | Zhenmei Shi, Junyi Wei, Yingyu Liang (University of Wisconsin-Madison); zhmeishi@cs.wisc.edu, jwei53@cs.wisc.edu, yliang@cs.wisc.edu |
| Pseudocode | Yes | Algorithm 1: Network Training via Gradient Descent (an illustrative sketch of this training loop appears after the table). |
| Open Source Code | No | The paper is theoretical and does not mention providing open-source code for its framework or any associated implementation. |
| Open Datasets | No | The paper analyzes theoretical data distributions (e.g., mixtures of Gaussians, parity functions; see the sampling sketch after the table) rather than training on named, publicly available datasets, and it provides no access information for a public dataset. |
| Dataset Splits | No | The paper is theoretical and works with mathematical models of data distributions rather than empirical datasets, so there are no specified training, validation, or test splits in the experimental sense. |
| Hardware Specification | No | The paper is purely theoretical and does not conduct empirical experiments, so it does not mention any hardware specifications. |
| Software Dependencies | No | The paper is purely theoretical and does not conduct empirical experiments, so it does not list any software dependencies with version numbers. |
| Experiment Setup | No | The paper discusses "proper hyper-parameter values" within its theoretical framework (e.g., in Theorem 3.12) and states the conditions these values must satisfy, but it does not specify concrete numerical values for an experimental setup. |
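
The paper's Algorithm 1 (Network Training via Gradient Descent) is stated abstractly rather than as runnable code. The sketch below is a minimal NumPy rendering of that general pattern, training a two-layer ReLU network by full-batch gradient descent; the logistic loss, width, learning rate, step count, and initialization scales are illustrative assumptions, not the paper's exact hyper-parameter conditions.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def train_two_layer(X, y, width=64, lr=0.5, steps=500, seed=0):
    """Full-batch gradient descent on f(x) = a . relu(W x) with the logistic
    loss, labels y in {-1, +1}. All names and defaults are illustrative."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(scale=1.0 / np.sqrt(d), size=(width, d))   # first layer
    a = rng.normal(scale=1.0 / np.sqrt(width), size=width)    # second layer
    for _ in range(steps):
        pre = X @ W.T                       # (n, width) pre-activations
        h = relu(pre)                       # hidden-layer features
        f = h @ a                           # network outputs, shape (n,)
        # d/df of log(1 + exp(-y f)), clipped to avoid overflow warnings
        g = -y / (1.0 + np.exp(np.clip(y * f, -30.0, 30.0)))
        grad_a = h.T @ g / n                # gradient w.r.t. second layer
        grad_W = ((g[:, None] * (pre > 0.0)) * a[None, :]).T @ X / n
        a -= lr * grad_a
        W -= lr * grad_W
    return W, a
```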
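
Likewise, the distribution families named in the Open Datasets row can be made concrete with small samplers. The sketch below draws from a two-component Gaussian mixture and a k-sparse parity; the cluster means, noise scale, and the choice of the first k coordinates as the relevant subset are illustrative assumptions, not the paper's exact constructions.

```python
import numpy as np

def sample_gaussian_mixture(n, d, separation=4.0, seed=0):
    """Two symmetric Gaussian clusters: the label y in {-1, +1} picks the mean +/-mu."""
    rng = np.random.default_rng(seed)
    mu = np.zeros(d)
    mu[0] = separation / 2.0                # clusters split along the first axis
    y = rng.choice([-1.0, 1.0], size=n)
    X = y[:, None] * mu[None, :] + rng.normal(size=(n, d))
    return X, y

def sample_sparse_parity(n, d, k=3, seed=0):
    """k-sparse parity over {-1, +1}^d: the label is the product of the
    first k coordinates (the relevant subset is unknown to the learner)."""
    rng = np.random.default_rng(seed)
    X = rng.choice([-1.0, 1.0], size=(n, d))
    y = np.prod(X[:, :k], axis=1)
    return X, y
```

A call such as `train_two_layer(*sample_sparse_parity(1024, 20))` would then run the training sketch above on synthetic parity data.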