Gradient Flow in Sparse Neural Networks and How Lottery Tickets Win
Authors: Utku Evci, Yani Ioannou, Cem Keskin, Yann Dauphin (pp. 6577–6586)
AAAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experimental investigation results in the following insights: |
| Researcher Affiliation | Collaboration | ¹Google, ²University of Calgary, ³Facebook; evcu@google.com, yani.ioannou@ucalgary.ca, cem.keskin@fb.com, ynd@google.com |
| Pseudocode | No | No explicit pseudocode or algorithm blocks are provided. |
| Open Source Code | Yes | Implementation of our sparse initialization and code for reproducing our experiments can be found at https://github.com/google-research/rigl/tree/master/rigl/rigl_tf2. |
| Open Datasets | Yes | Our experiments include the following settings: LeNet-5 on MNIST, VGG-16 on ImageNet-2012 and ResNet-50 on ImageNet-2012. |
| Dataset Splits | No | The paper mentions using MNIST and ImageNet datasets and discusses training and testing, but does not explicitly provide details about specific training/validation/test splits (e.g., percentages or sample counts) for reproducibility. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments, such as GPU/CPU models or memory specifications. |
| Software Dependencies | No | The GitHub link implies TensorFlow 2 ('rigl_tf2'), but the paper does not explicitly list software dependencies with version numbers within the text. |
| Experiment Setup | No | Experimental details can be found in the extended version of our work. |