Globally Gated Deep Linear Networks
Authors: Qianyi Li, Haim Sompolinsky
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our theory accurately captures the behavior of finite width GGDLNs trained with gradient descent (GD) dynamics. |
| Researcher Affiliation | Academia | ¹Biophysics Graduate Program, Harvard University; ²Center for Brain Science, Harvard University; ³Edmond and Lily Safra Center for Brain Sciences, Hebrew University |
| Pseudocode | No | The paper describes mathematical derivations and theoretical concepts but does not include any pseudocode or explicitly labeled algorithm blocks. |
| Open Source Code | Yes | Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] In supplementary material |
| Open Datasets | Yes | In Fig. 3, we show parameter regimes where the bias can increase (Fig. 3(a-c)) or decrease (Fig. 3(d-f)) with σ on MNIST dataset [19] (Appendix C.3). |
| Dataset Splits | Yes | Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] See Appendix C |
| Hardware Specification | Yes | Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] See Appendix C |
| Software Dependencies | Yes | Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] See Appendix C |
| Experiment Setup | Yes | Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] See Appendix C |
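
For readers unfamiliar with the model named in the table, here is a minimal sketch of a GGDLN forward pass, assuming one scalar input-dependent gate shared by all units in each hidden layer. The widths, the step-function gate, and all variable names below are illustrative assumptions, not the authors' released code.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_gate(v):
    """Input-dependent scalar gate shared by every unit in a layer.

    A simple step gate g(x) = step(v . x) is assumed here; the paper's
    theory allows general input-dependent gating functions.
    """
    return lambda x: float(np.dot(v, x) > 0)

def ggdln_forward(x, weights, gates, readout):
    """Output f(x) = a . [g_L(x) W_L ... g_1(x) W_1 x].

    Each layer is linear in its weights; the only nonlinearity in the
    input comes from the multiplicative, layer-wide gates.
    """
    h = x
    for W, g in zip(weights, gates):
        h = g(x) * (W @ h)  # one scalar gate modulates the whole layer
    return readout @ h

d, n, depth = 10, 50, 2  # input dim, hidden width, number of hidden layers
weights = [rng.normal(size=(n, d)) / np.sqrt(d)] + \
          [rng.normal(size=(n, n)) / np.sqrt(n) for _ in range(depth - 1)]
gates = [make_gate(rng.normal(size=d)) for _ in range(depth)]
readout = rng.normal(size=n) / np.sqrt(n)

x = rng.normal(size=d)
print(ggdln_forward(x, weights, gates, readout))
```

Because the gates are shared across all units in a layer, the network remains linear in its weights for any fixed input, which is what makes the finite-width GD dynamics analytically tractable in the paper.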