ChipNet: Budget-Aware Pruning with Heaviside Continuous Approximations
Authors: Rishabh Tiwari, Udbhav Bamba, Arnav Chavan, Deepak Gupta
ICLR 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that ChipNet outperforms state-of-the-art structured pruning methods by remarkable margins of up to 16.1% in terms of accuracy. |
| Researcher Affiliation | Collaboration | Transmute AI Research, The Netherlands; Indian Institute of Technology (ISM) Dhanbad, India; Informatics Institute, University of Amsterdam, The Netherlands |
| Pseudocode | Yes | Algorithm 1: ChipNet Pruning Approach |
| Open Source Code | Yes | Code is publicly available at https://github.com/transmuteAI/ChipNet |
| Open Datasets | Yes | For datasets, we have chosen CIFAR-10/100 (Krizhevsky, 2009) and Tiny ImageNet (Wu et al.). |
| Dataset Splits | Yes | Finally, the model with the best performance on the validation set is chosen for fine-tuning. |
| Hardware Specification | Yes | All experiments were run on a Google Cloud Platform instance with an NVIDIA V100 GPU (16 GB), 16 GB RAM, and a 4-core processor. |
| Software Dependencies | No | The paper mentions optimizers such as 'AdamW' and 'SGD' but does not provide version numbers for these or any other software dependencies, such as deep learning frameworks or programming languages. |
| Experiment Setup | Yes | For the combined loss L in Eq. 1, weights α1 and α2 are set to 10 and 30, respectively, across all experiments. [...] WRN-26-12, MobileNetV2, ResNet-50, ResNet-101, and ResNet-110 were trained with a batch size of 128 at an initial learning rate of 5 × 10⁻² using the SGD optimizer with momentum 0.9 and weight decay 10⁻³. We use a step learning-rate strategy to decay the learning rate by 0.5 after every 30 epochs. (A minimal sketch of this setup follows the table.) |
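
The quoted training configuration maps onto a standard PyTorch optimizer-and-scheduler setup. The sketch below is an illustrative reconstruction, not the authors' released code (that is at https://github.com/transmuteAI/ChipNet): the model, the data, the two auxiliary loss terms (here named `budget_loss` and `crispness_loss` and weighted by α1 = 10 and α2 = 30, their mapping to the paper's Eq. 1 being an assumption), and the epoch count are all hypothetical stand-ins; only the SGD hyperparameters, batch size, and step schedule come from the table above.

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset

# Illustrative stand-ins only; the real network, data pipeline, and pruning
# losses live in the authors' repository (https://github.com/transmuteAI/ChipNet).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 100))
dataset = TensorDataset(torch.randn(512, 3, 32, 32), torch.randint(0, 100, (512,)))
loader = DataLoader(dataset, batch_size=128, shuffle=True)  # batch size 128 as reported
criterion = nn.CrossEntropyLoss()

def budget_loss(m):     # hypothetical placeholder for the paper's budget term
    return torch.tensor(0.0)

def crispness_loss(m):  # hypothetical placeholder for the paper's crispness term
    return torch.tensor(0.0)

# SGD with momentum 0.9, weight decay 1e-3, initial learning rate 5e-2 (as reported)
optimizer = optim.SGD(model.parameters(), lr=5e-2, momentum=0.9, weight_decay=1e-3)
# Step schedule: multiply the learning rate by 0.5 every 30 epochs
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.5)

alpha1, alpha2 = 10.0, 30.0  # weights of the auxiliary terms in the combined loss (Eq. 1)

for epoch in range(60):  # epoch count chosen for illustration only
    for inputs, targets in loader:
        optimizer.zero_grad()
        loss = (criterion(model(inputs), targets)
                + alpha1 * budget_loss(model)
                + alpha2 * crispness_loss(model))
        loss.backward()
        optimizer.step()
    scheduler.step()
```

The same optimizer and `StepLR` configuration would apply unchanged to any of the listed architectures; only the `model` definition and the two placeholder loss functions would need to be replaced with the paper's actual components.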