Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Gradient Flow in Sparse Neural Networks and How Lottery Tickets Win

Authors: Utku Evci, Yani Ioannou, Cem Keskin, Yann Dauphin

AAAI 2022, pp. 6577-6586 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
--- | --- | ---
Research Type | Experimental | "Our experimental investigation results in the following insights:"
Researcher Affiliation | Collaboration | 1 Google, 2 University of Calgary, 3 Facebook; EMAIL, EMAIL, EMAIL, EMAIL
Pseudocode | No | No explicit pseudocode or algorithm blocks are provided.
Open Source Code | Yes | "Implementation of our sparse initialization and code for reproducing our experiments can be found at https://github.com/google-research/rigl/tree/master/rigl/rigl_tf2."
Open Datasets | Yes | "Our experiments include the following settings: LeNet5 on MNIST, VGG16 on ImageNet-2012 and ResNet-50 on ImageNet-2012."
Dataset Splits | No | The paper mentions using the MNIST and ImageNet datasets and discusses training and testing, but does not explicitly provide training/validation/test splits (e.g., percentages or sample counts) for reproducibility.
Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments, such as GPU/CPU models or memory specifications.
Software Dependencies | No | The GitHub path ('rigl_tf2') implies TensorFlow 2, but the paper does not explicitly list software dependencies with version numbers.
Experiment Setup | No | "Experimental details can be found in the extended version of our work."