SInGE: Sparsity via Integrated Gradients Estimation of Neuron Relevance
Authors: Edouard YVINEC, Arnaud Dapogny, Matthieu Cord, Kevin Bailly
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show through extensive validation on several datasets, architectures as well as pruning scenarios that the proposed method, dubbed SInGE, significantly outperforms existing state-of-the-art DNN pruning methods. |
| Researcher Affiliation | Collaboration | Sorbonne Université, CNRS, ISIR, F-75005, 4 Place Jussieu, 75005 Paris, France; Datakalab, 114 boulevard Malesherbes, 75017 Paris, France |
| Pseudocode | Yes | Algorithm 1 SIn GE Algorithm |
| Open Source Code | No | The paper does not provide an explicit statement or link for open-source code for the described methodology. |
| Open Datasets | Yes | we evaluate our models on the two de facto standard datasets for architecture compression, i.e. Cifar10 [37] and ImageNet [38]. |
| Dataset Splits | No | The paper uses standard datasets (Cifar10, ImageNet) but does not explicitly state training, validation, or test split percentages or sample counts. It only mentions 'standard evaluation metrics', which implies standard splits without specifying them. |
| Hardware Specification | Yes | All experiments were performed on NVidia V100 GPU. |
| Software Dependencies | No | Our implementations are based on tensorflow and numpy python libraries. The paper does not provide specific version numbers for these software dependencies. |
| Experiment Setup | Yes | We measured the different pruning criteria using random batches X of 64 training images for both Cifar10 and ImageNet and fine-tuned the pruned models with batches of size 128 and 64 for Cifar10 and ImageNet, respectively. The number of optimization steps varies from 1k to 5k on Cifar10 and from 5k to 50k on ImageNet... for the former, we use µ = 0.9 and µ = 0.95 for ImageNet and Cifar10, respectively. For unstructured pruning, we use µ = 0.8 for ImageNet. |
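The paper's core idea, ranking neurons by an integrated-gradients estimate of their relevance and pruning the lowest-scoring ones, can be illustrated with a minimal numpy sketch. This is not the SInGE criterion itself (the model, the readout `f`, and the zero baseline are illustrative assumptions); it only shows the generic integrated-gradients recipe: average the gradient of the output with respect to each hidden activation along a straight path from a baseline to the actual activation, then scale by the activation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy layer: input x -> hidden activations h -> nonlinear readout f.
# (Illustrative stand-in, not the architectures pruned in the paper.)
W1 = rng.normal(size=(4, 3))
w2 = rng.normal(size=4)
x = rng.normal(size=3)
h = np.maximum(W1 @ x, 0.0)              # hidden activations (ReLU)

def f(hs):
    """Readout from the hidden layer onward."""
    return w2 @ np.tanh(hs)

def grad_f(hs):
    """Analytic gradient of f w.r.t. the hidden activations."""
    return w2 * (1.0 - np.tanh(hs) ** 2)

def integrated_gradients(h, steps=256):
    """Midpoint Riemann-sum estimate of integrated gradients for each
    hidden neuron along the straight path from the zero baseline to h."""
    alphas = (np.arange(steps) + 0.5) / steps
    grads = np.stack([grad_f(a * h) for a in alphas])
    return h * grads.mean(axis=0)        # (h - baseline) * mean gradient

ig = integrated_gradients(h)
# Under an IG-style relevance criterion, neurons with the smallest
# |ig| contribute least to the output and are pruning candidates.
relevance = np.abs(ig)
prune_order = np.argsort(relevance)
```

A useful sanity check is the completeness property of integrated gradients: the attributions sum (up to discretization error) to `f(h) - f(baseline)`, so the scores account for the whole output change rather than just a local gradient.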