On the spectral bias of two-layer linear networks
Authors: Aditya Vardhan Varre, Maria-Luiza Vladarean, Loucas Pillaud-Vivien, Nicolas Flammarion
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We support our findings with numerical experiments illustrating the phenomena. (Abstract) and We consider a regression problem on synthetic data... In Figure 1a, we show the evolution... In figure 1b, we depict the time evolution of singular values... we present a toy experiment for ReLU networks (see Figure 2 and further details in Figure 5). This property is empirically verified in Figure 2. (Section 4 and its subsections). |
| Researcher Affiliation | Academia | Aditya Varre EPFL aditya.varre@epfl.ch Maria-Luiza Vladarean EPFL maria-luiza.vladarean@epfl.ch Loucas Pillaud-Vivien Courant Institute of Mathematical Sciences, NYU / Flatiron Institute lpillaudvivien@flatironinstitute.org Nicolas Flammarion EPFL nicolas.flammarion@epfl.ch |
| Pseudocode | No | The paper contains mathematical derivations and descriptions of processes but does not include any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statements or links indicating that open-source code for the described methodology is available. |
| Open Datasets | Yes | We consider a regression problem on synthetic data, with n = 5 samples of Gaussian data in R^10 (d = 10) and the labels in R^3 (k = 3) generated by a ground truth β* ∈ R^{d×k}. (Section 4, Experiments) and In figure 1b, we depict the time evolution of singular values for GD and LNGD on a scalar regression problem with orthogonal data in R^5 (n, d = 5)... (Section 4, Experiments). |
| Dataset Splits | No | The paper mentions using 'synthetic data' for experiments but does not provide specific details on how this data was split into training, validation, and test sets, nor does it reference any standard predefined splits. |
| Hardware Specification | Yes | The experiments were run on a 16-GB RAM Apple M1 mac with OS Ventura 13.3.1. |
| Software Dependencies | No | While the paper describes mathematical models and numerical simulations (e.g., 'discretize the SDE (4.1) with a step-size'), it does not specify any software libraries, frameworks, or their version numbers used for implementation. |
| Experiment Setup | Yes | We consider a network with width l = 200. In Figure 1a, we show the evolution of the top-4 singular values of the hidden layer W1. We use orthogonal initialization for the network with the two scales of initialization γ = 1, 10^{-4}. To complement this, we also consider a Gaussian initialization with variance 0.01; specifically, we initialize the inner layer with d = 10 Gaussian random vectors in R^l. (Section 4, Experiments) and Further details on hyper-parameters can be found in the Appendix. (Section 4, Experiments) and We discretize the SDE (4.1) with a step-size 1/t. (Appendix A). A minimal sketch of this synthetic setup follows the table. |
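
Because the data are fully synthetic, the setup quoted in the Open Datasets and Experiment Setup rows can be re-created from the reported numbers alone (n = 5, d = 10, k = 3, width l = 200, orthogonal initialization scaled by γ). The sketch below is a minimal NumPy reconstruction under those assumptions; the learning rate, number of gradient-descent steps, random seed, and all variable names are illustrative choices not taken from the paper.

```python
# Minimal sketch of the synthetic two-layer linear regression setup
# (assumed reconstruction; lr, steps, and seed are not reported in the paper).
import numpy as np

rng = np.random.default_rng(0)

n, d, k, width = 5, 10, 3, 200   # samples, input dim, output dim, hidden width
gamma = 1e-4                     # one of the two reported init scales (1 and 1e-4)

# Synthetic data: Gaussian inputs, labels from a ground-truth matrix in R^{d x k}.
X = rng.standard_normal((n, d))
beta_star = rng.standard_normal((d, k))
Y = X @ beta_star

def orthogonal(rows, cols):
    """Return a (rows, cols) matrix with orthonormal rows or columns."""
    a = rng.standard_normal((max(rows, cols), min(rows, cols)))
    q, _ = np.linalg.qr(a)               # q has orthonormal columns
    return q if rows >= cols else q.T

# Two-layer linear network f(x) = x W1 W2 with orthogonal init scaled by gamma.
W1 = gamma * orthogonal(d, width)
W2 = gamma * orthogonal(width, k)

# Plain gradient descent on the squared loss (hyper-parameters assumed).
lr, steps = 0.05, 20_000
for _ in range(steps):
    residual = X @ W1 @ W2 - Y           # (n, k)
    grad_W1 = X.T @ residual @ W2.T / n
    grad_W2 = W1.T @ X.T @ residual / n
    W1 -= lr * grad_W1
    W2 -= lr * grad_W2

# Top-4 singular values of the hidden layer W1, analogous to Figure 1a.
print(np.linalg.svd(W1, compute_uv=False)[:4])
```

Swapping `gamma` between 1 and 1e-4 contrasts the two initialization scales mentioned in the Experiment Setup row; the Gaussian-initialization variant would instead draw `W1` entries i.i.d. with variance 0.01.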