Robust Pruning at Initialization

Authors: Soufiane Hayou, Jean-Francois Ton, Arnaud Doucet, Yee Whye Teh

ICLR 2021

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "This allows us to propose novel principled approaches which we validate experimentally on a variety of NN architectures." |
| Researcher Affiliation | Academia | "Soufiane Hayou, Jean-Francois Ton, Arnaud Doucet & Yee Whye Teh, Department of Statistics, University of Oxford, United Kingdom. {soufiane.hayou, ton, doucet, teh}@stats.ox.ac.uk" |
| Pseudocode | Yes | "Algorithm 1: Rescaling trick for FFNN" (see the illustrative sketch after this table) |
| Open Source Code | No | The paper does not include any explicit statement about releasing source code or a link to a code repository. |
| Open Datasets | Yes | "We validate the results on MNIST, CIFAR10, CIFAR100 and Tiny ImageNet." |
| Dataset Splits | Yes | "We validate the results on MNIST, CIFAR10, CIFAR100 and Tiny ImageNet." |
| Hardware Specification | No | The paper does not specify any hardware used for running the experiments. |
| Software Dependencies | No | The paper mentions SGD as an optimizer and activation functions such as tanh, ReLU, and ELU, but does not list any specific software libraries, frameworks, or version numbers. |
| Experiment Setup | Yes | "We use SGD with batch size 100 and learning rate 10⁻³, which we found to be optimal using a grid search with an exponential scale of 10." (see the training-loop sketch below) |
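
For readers who want the flavor of the rescaling trick named in the Pseudocode row, below is a minimal illustrative sketch in NumPy. It is an assumption-laden reading, not the paper's Algorithm 1 verbatim: it assumes the goal is to magnitude-prune a layer at initialization and then rescale the surviving weights so the layer's empirical second moment (and hence, under mean-field assumptions, the pre-activation variance) is preserved. The function name `prune_and_rescale` and its parameters are hypothetical.

```python
import numpy as np

def prune_and_rescale(weights: np.ndarray, sparsity: float):
    """Magnitude-prune a weight matrix, then rescale the survivors.

    Hypothetical sketch (not the paper's Algorithm 1 verbatim): the
    surviving weights are scaled so that the layer's second moment
    E[w^2] matches its pre-pruning value, which under mean-field
    assumptions keeps the pre-activation variance stable.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)            # number of weights to remove
    # Threshold at the k-th smallest magnitude; keep everything above it.
    threshold = np.sort(flat)[k - 1] if k > 0 else -np.inf
    mask = np.abs(weights) > threshold
    pruned = weights * mask
    second_moment_before = np.mean(weights ** 2)
    second_moment_after = np.mean(pruned ** 2)
    scale = (np.sqrt(second_moment_before / second_moment_after)
             if second_moment_after > 0 else 1.0)
    return pruned * scale, mask

# Usage: prune 90% of a randomly initialized layer and rescale the rest.
rng = np.random.default_rng(0)
W = rng.normal(0.0, 1.0, size=(256, 256))
W_pruned, mask = prune_and_rescale(W, sparsity=0.9)
```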
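Similarly, here is a minimal sketch of the quoted experiment setup (SGD, batch size 100, learning rate 10⁻³), assuming a PyTorch implementation. The single linear model and the synthetic tensors are placeholders, not the architectures or datasets used in the paper.

```python
import torch

model = torch.nn.Linear(784, 10)  # placeholder model, not a paper architecture
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)  # learning rate 10^-3 as quoted

# Batch size 100 as quoted; the synthetic dataset below is purely illustrative.
dataset = torch.utils.data.TensorDataset(
    torch.randn(1000, 784), torch.randint(0, 10, (1000,))
)
loader = torch.utils.data.DataLoader(dataset, batch_size=100, shuffle=True)
loss_fn = torch.nn.CrossEntropyLoss()

for x, y in loader:                # one pass over the synthetic data
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    optimizer.step()
```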