Convergence and Alignment of Gradient Descent with Random Backpropagation Weights
Authors: Ganlin Song, Ruitu Xu, John Lafferty
NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In the overparameterized setting, we prove that the error converges to zero exponentially fast, and also that regularization is necessary in order for the parameters to become aligned with the random backpropagation weights. Simulations are given that are consistent with this analysis and suggest further generalizations. |
| Researcher Affiliation | Academia | Ganlin Song, Ruitu Xu, John Lafferty; Department of Statistics and Data Science, Wu Tsai Institute, Yale University; {ganlin.song, ruitu.xu, john.lafferty}@yale.edu |
| Pseudocode | Yes | Algorithm 1 Feedback Alignment. Input: dataset {(x_i, y_i)}_{i=1}^n, step size η. 1: initialize W, β and b as Gaussian; 2: while not converged do; 3: β_r ← β_r − (η/√p) Σ_{i=1}^n e_i ψ(w_r^⊤ x_i); 4: w_r ← w_r − (η/√p) Σ_{i=1}^n e_i b_r ψ′(w_r^⊤ x_i) x_i; 5: for r ∈ [p]; 6: end while. (A runnable sketch of these updates follows the table.) |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. |
| Open Datasets | Yes | The MNIST dataset is available under the Creative Commons Attribution-Share Alike 3.0 license (Deng, 2012). |
| Dataset Splits | Yes | It consists of 60,000 training images and 10,000 test images of dimension 28 by 28. |
| Hardware Specification | Yes | We implement the feedback alignment procedure in PyTorch as an extension of the autograd module for backpropagation, and the training is done on V100 GPUs from internal clusters. (See the autograd sketch after the table.) |
| Software Dependencies | No | The paper mentions 'PyTorch' but does not provide a specific version number for the software dependency. |
| Experiment Setup | Yes | During training, we take step size η = 10⁻⁴ for linear networks and η = 10⁻³, 10⁻² for ReLU and Tanh networks, respectively. |
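
The Algorithm 1 row above quotes a flattened PDF extraction of the paper's pseudocode. Below is a minimal NumPy sketch of those two update rules for a two-layer network f(x) = (1/√p) Σ_r β_r ψ(w_r^⊤ x) trained with feedback alignment. The 1/√p scaling, the squared-error residual e_i = f(x_i) − y_i, and the function names are assumptions made for illustration; this is not the authors' implementation.

```python
import numpy as np

def feedback_alignment_step(W, beta, b, X, y, eta, psi, dpsi):
    """One sketch update of Algorithm 1 for a two-layer network.

    Forward model (assumed): f(x) = (1/sqrt(p)) * sum_r beta_r * psi(w_r^T x).
    The backward pass uses the fixed random weights b in place of beta.
    Shapes: X (n, d), y (n,), W (p, d), beta (p,), b (p,).
    """
    n, d = X.shape
    p = W.shape[0]
    Z = X @ W.T                          # pre-activations w_r^T x_i, shape (n, p)
    H = psi(Z)                           # hidden activations
    f = H @ beta / np.sqrt(p)            # network outputs
    e = f - y                            # residuals e_i (assumed squared-error loss)

    # beta_r <- beta_r - (eta / sqrt(p)) * sum_i e_i * psi(w_r^T x_i)
    beta_new = beta - (eta / np.sqrt(p)) * (H.T @ e)

    # w_r <- w_r - (eta / sqrt(p)) * sum_i e_i * b_r * psi'(w_r^T x_i) * x_i
    G = (e[:, None] * dpsi(Z)) * b[None, :]      # per-example factor e_i * b_r * psi'(.)
    W_new = W - (eta / np.sqrt(p)) * (G.T @ X)   # sum over examples i

    return W_new, beta_new

# Example activation pair, e.g. for the Tanh network:
# psi, dpsi = np.tanh, lambda z: 1.0 - np.tanh(z) ** 2
```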
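
The hardware row states that feedback alignment was implemented in PyTorch as an extension of the autograd module, but the paper releases no code. The sketch below only illustrates one common way to realize that pattern: a linear layer whose backward pass routes the error through a fixed random feedback matrix rather than the transpose of its forward weight. The names FALinearFunction and FALinear, the Gaussian 1/√(in_features) initialization, and the bias handling are hypothetical.

```python
import torch
from torch import nn

class FALinearFunction(torch.autograd.Function):
    """Linear map whose backward pass uses a fixed random feedback matrix B
    in place of W^T (the feedback-alignment substitution)."""

    @staticmethod
    def forward(ctx, x, weight, bias, feedback):
        ctx.save_for_backward(x, weight, bias, feedback)
        return x @ weight.t() + bias

    @staticmethod
    def backward(ctx, grad_output):
        x, weight, bias, feedback = ctx.saved_tensors
        grad_input = grad_output @ feedback      # random B instead of weight
        grad_weight = grad_output.t() @ x        # usual gradient for the forward weight
        grad_bias = grad_output.sum(dim=0)
        return grad_input, grad_weight, grad_bias, None  # no gradient for B

class FALinear(nn.Module):
    """Drop-in linear layer with fixed Gaussian feedback weights."""

    def __init__(self, in_features, out_features):
        super().__init__()
        scale = in_features ** -0.5
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * scale)
        self.bias = nn.Parameter(torch.zeros(out_features))
        # Feedback weights are a buffer: never updated, never tied to self.weight.
        self.register_buffer("feedback", torch.randn(out_features, in_features) * scale)

    def forward(self, x):
        return FALinearFunction.apply(x, self.weight, self.bias, self.feedback)
```

Stacking FALinear layers with the chosen nonlinearity ψ and running an ordinary optimizer then differs from standard backpropagation only in how the error is routed backward, which is the point of the autograd extension described in the table.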