Reverse-engineering deep ReLU networks
Authors: David Rolnick, Konrad Kording
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the success of our algorithm on both untrained and trained networks. In keeping with literature on ReLU network initialization (He et al., 2015; Hanin & Rolnick, 2018), networks were initialized using i.i.d. normal weights with variance 2/fan-in and i.i.d. normal biases with unit variance. Networks were trained on either the MNIST dataset (n_in = 784, n_out = 10) or a memorization task of 1000 datapoints (n_in = 10, n_out = 2) with coordinates drawn i.i.d. from a unit Gaussian and given arbitrary binary labels. Training was performed using the Adam optimizer (Kingma & Ba, 2014) and a cross-entropy loss applied to the softmax of the final layer, over 20 epochs for MNIST and 1000 epochs for the memorization task. The trained networks (when sufficiently large) were able to attain near-perfect accuracy. |
| Researcher Affiliation | Academia | University of Pennsylvania, Philadelphia, PA, USA. Correspondence to: David Rolnick <drolnick@seas.upenn.edu>. |
| Pseudocode | Yes | Algorithm 1 The first layer |
| Open Source Code | No | The paper does not provide any explicit statements about open-source code availability or links to a code repository. |
| Open Datasets | Yes | Networks were trained on either the MNIST dataset (n_in = 784, n_out = 10) or a memorization task of 1000 datapoints (n_in = 10, n_out = 2) with coordinates drawn i.i.d. from a unit Gaussian and given arbitrary binary labels. |
| Dataset Splits | No | The paper mentions training on MNIST and a memorization task, but does not provide specific details on dataset splits (e.g., percentages, counts, or predefined splits) for training, validation, or testing. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments. |
| Software Dependencies | No | The paper mentions the 'Adam optimizer' but does not specify any software versions or dependencies with version numbers (e.g., Python, deep learning frameworks). |
| Experiment Setup | Yes | In keeping with literature on ReLU network initialization (He et al., 2015; Hanin & Rolnick, 2018), networks were initialized using i.i.d. normal weights with variance 2/fan-in and i.i.d. normal biases with unit variance. [...] Training was performed using the Adam optimizer (Kingma & Ba, 2014) and a cross-entropy loss applied to the softmax of the final layer, over 20 epochs for MNIST and 1000 epochs for the memorization task. |
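
The Experiment Setup row above fully specifies the initialization scheme and loss, but not the framework, layer widths, learning rate, or batch size. Below is a minimal sketch, assuming PyTorch, of the described setup for the memorization task: i.i.d. normal weights with variance 2/fan-in, i.i.d. unit-variance normal biases, 1000 unit-Gaussian datapoints with arbitrary binary labels, and Adam with cross-entropy on the softmax of the final layer over 1000 epochs. The hidden widths, learning rate, random seed, and full-batch updates are assumptions, not details from the paper.

```python
# Hypothetical sketch (not the authors' code) of the quoted experimental setup.
import torch
import torch.nn as nn

def init_relu_net(widths):
    """MLP initialized as described: i.i.d. normal weights with variance 2/fan-in
    (He et al., 2015) and i.i.d. normal biases with unit variance."""
    layers = []
    for n_in, n_out in zip(widths[:-1], widths[1:]):
        lin = nn.Linear(n_in, n_out)
        nn.init.normal_(lin.weight, mean=0.0, std=(2.0 / n_in) ** 0.5)
        nn.init.normal_(lin.bias, mean=0.0, std=1.0)
        layers += [lin, nn.ReLU()]
    return nn.Sequential(*layers[:-1])  # drop the trailing ReLU: final layer is linear

# Memorization task: 1000 points with i.i.d. unit-Gaussian coordinates
# (n_in = 10) and arbitrary binary labels (n_out = 2).
torch.manual_seed(0)                      # seed is an assumption
X = torch.randn(1000, 10)
y = torch.randint(0, 2, (1000,))

net = init_relu_net([10, 64, 64, 2])      # hidden widths are an assumption
opt = torch.optim.Adam(net.parameters(), lr=1e-3)  # default Adam lr, assumed
loss_fn = nn.CrossEntropyLoss()           # cross-entropy applied to the softmax of the final layer

for epoch in range(1000):                 # 1000 epochs for the memorization task
    opt.zero_grad()                       # full-batch updates are an assumption
    loss = loss_fn(net(X), y)
    loss.backward()
    opt.step()

acc = (net(X).argmax(dim=1) == y).float().mean()
print(f"training accuracy: {acc:.3f}")
```

For the MNIST variant, the same construction would use n_in = 784, n_out = 10, and 20 epochs; sufficiently large networks are reported to reach near-perfect training accuracy.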