DropNet: Reducing Neural Network Complexity via Iterative Pruning
Authors: Chong Min John Tan, Mehul Motani
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirically, we show that DropNet is robust across diverse scenarios, including MLPs and CNNs using the MNIST, CIFAR-10 and Tiny ImageNet datasets. |
| Researcher Affiliation | Academia | Department of Electrical and Computer Engineering, National University of Singapore. |
| Pseudocode | Yes | Algorithm 1 Iterative Pruning Algorithm |
| Open Source Code | Yes | To encourage further research on iterative pruning techniques, the source code used for our experiments is publicly available at https://github.com/tanchongmin/DropNet. |
| Open Datasets | Yes | we test it empirically using MLPs and CNNs on MNIST (LeCun et al., 2010), CIFAR-10 (Krizhevsky et al., 2009) and Tiny ImageNet (taken from https://tinyimagenet.herokuapp.com, results in Supplementary Material) datasets. |
| Dataset Splits | Yes | For MNIST, the dataset is split into 54000 training, 6000 validation and 10000 testing samples. For CIFAR-10, the dataset is split into 45000 training, 5000 validation and 10000 testing samples. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions using "off-the-shelf machine learning libraries" but does not specify any software names with version numbers required for replication. |
| Experiment Setup | Yes | The optimizer used is SGD with a learning rate of 0.1. Masks are applied at the start of each training cycle; each cycle comprises 100 epochs with early stopping on validation loss (patience of 5 epochs). Over each training cycle, a fraction p = 0.2 of the nodes are dropped. |
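
The Pseudocode and Experiment Setup rows above describe an iterative node-pruning loop: train, drop a fraction p = 0.2 of the remaining nodes via a mask, and repeat. Below is a minimal PyTorch sketch of that loop under the reported settings (SGD with learning rate 0.1, 100-epoch cycles, early stopping with patience 5). It is not the authors' implementation (which is at the GitHub link above): the network architecture, the mean-|activation| pruning metric used here as a stand-in for the paper's node metric, and the data loaders are illustrative assumptions.

```python
# Illustrative sketch of an iterative node-pruning loop; hyperparameters follow
# the table above, but the model, metric, and data pipeline are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedMLP(nn.Module):
    """Two-layer MLP whose hidden nodes can be masked out (dropped)."""
    def __init__(self, in_dim=784, hidden=256, out_dim=10):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, out_dim)
        # Binary mask over hidden nodes; 0 = node dropped.
        self.register_buffer("mask", torch.ones(hidden))

    def forward(self, x):
        h = F.relu(self.fc1(x)) * self.mask  # apply the node mask
        return self.fc2(h)

def train_one_cycle(model, train_loader, val_loader, epochs=100, patience=5):
    """One training cycle: SGD (lr 0.1), early stopping on validation loss."""
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    best_val, wait = float("inf"), 0
    for _ in range(epochs):
        model.train()
        for x, y in train_loader:
            opt.zero_grad()
            loss = F.cross_entropy(model(x), y)
            loss.backward()
            opt.step()
        model.eval()
        with torch.no_grad():
            val = sum(F.cross_entropy(model(x), y, reduction="sum")
                      for x, y in val_loader)
        if val < best_val:
            best_val, wait = val, 0
        else:
            wait += 1
            if wait >= patience:  # patience of 5 epochs
                break

def prune_nodes(model, train_loader, p=0.2):
    """Drop a fraction p of the still-active hidden nodes with the lowest
    mean |activation| over the training set (illustrative metric)."""
    model.eval()
    score, n = torch.zeros_like(model.mask), 0
    with torch.no_grad():
        for x, _ in train_loader:
            score += F.relu(model.fc1(x)).abs().sum(dim=0)
            n += x.shape[0]
    score = score / n
    score[model.mask == 0] = float("inf")      # never re-rank dropped nodes
    k = int(p * int(model.mask.sum().item()))  # drop p of the remaining nodes
    if k > 0:
        drop = torch.topk(score, k, largest=False).indices
        model.mask[drop] = 0.0

# Iterative pruning: alternate training cycles and node drops until the
# desired fraction of nodes remains.
# model = MaskedMLP()
# for cycle in range(num_cycles):
#     train_one_cycle(model, train_loader, val_loader)
#     prune_nodes(model, train_loader, p=0.2)
```

The mask is registered as a buffer so that dropped nodes stay dropped across training cycles and checkpoints; whether weights are retrained from scratch or fine-tuned after each drop is a design choice the sketch leaves to the driver loop.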