Sparsifying Networks via Subdifferential Inclusion
Authors: Sagar Verma, Jean-Christophe Pesquet
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we conduct various experiments to validate the effectiveness of SIS in terms of test accuracy vs. sparsity and inference time FLOPs vs. sparsity by comparing against RigL (Evci et al., 2020). |
| Researcher Affiliation | Academia | Sagar Verma¹, Jean-Christophe Pesquet¹ (¹Université Paris-Saclay, CentraleSupélec, Inria, Centre de Vision Numérique). Correspondence to: Sagar Verma <sagar.verma@centralesupelec.fr>. |
| Pseudocode | Yes | Algorithm 1 Douglas-Rachford algorithm for network compression |
| Open Source Code | Yes | Project page: https://sagarverma.github.io/compression |
| Open Datasets | Yes | We compare SIS with competitive baselines on CIFAR10/100 for three different sparsity regimes 90%, 95%, 98%, and the results are listed in Table 2. |
| Dataset Splits | No | The paper mentions "test accuracy" and that "20% samples per class were used during pruning phase" for some experiments, but it does not explicitly state the training/validation/test splits (percentages or counts) needed to reproduce the data partitioning for all experiments. |
| Hardware Specification | No | The paper mentions using "Graphical Processing Units (GPUs)" in general terms but does not specify the hardware used for the experiments, such as GPU or CPU models or memory capacity. |
| Software Dependencies | No | The paper does not provide specific ancillary software details such as library or solver names with version numbers. |
| Experiment Setup | Yes | 20% samples per class were used during pruning phase of all the methods and were run for 40 epochs. |
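The Pseudocode row above refers to the paper's "Algorithm 1 Douglas-Rachford algorithm for network compression". As a point of reference, the generic Douglas-Rachford splitting iteration that this algorithm builds on can be sketched on a toy sparsification problem. The sketch below is an illustrative assumption, not the paper's actual Algorithm 1: it minimizes `(1/2)||x - y||² + lam·||x||₁`, whose closed-form solution is componentwise soft-thresholding of `y`, so convergence is easy to check.

```python
# Minimal sketch of Douglas-Rachford splitting (not the paper's Algorithm 1).
# Problem: minimize f(x) + g(x) with
#   f(x) = (1/2)||x - y||^2   and   g(x) = lam * ||x||_1.
# The known solution is soft-thresholding of y with threshold lam.

def soft_threshold(v, t):
    """Prox of t*||.||_1: componentwise shrinkage toward zero."""
    return [max(abs(vi) - t, 0.0) * (1.0 if vi >= 0 else -1.0) for vi in v]

def prox_quadratic(v, y, gamma):
    """Prox of gamma*(1/2)||x - y||^2: weighted average of v and y."""
    return [(vi + gamma * yi) / (1.0 + gamma) for vi, yi in zip(v, y)]

def douglas_rachford(y, lam, gamma=1.0, iters=200):
    """Douglas-Rachford iteration; returns the primal iterate prox_g(z)."""
    z = [0.0] * len(y)
    for _ in range(iters):
        x = soft_threshold(z, gamma * lam)              # prox of the l1 term
        r = [2 * xi - zi for xi, zi in zip(x, z)]       # reflection step
        w = prox_quadratic(r, y, gamma)                 # prox of the data term
        z = [zi + wi - xi for zi, xi, wi in zip(z, x, w)]  # DR update
    return soft_threshold(z, gamma * lam)
```

For `y = [3.0, -0.5, 1.2]` and `lam = 1.0` the iterates converge to `[2.0, 0.0, 0.2]`, i.e. the soft-thresholded vector, which matches the closed-form optimum of this separable problem. The paper replaces the toy objective with a network-compression formulation, but the prox/reflect/average structure of the iteration is the same.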