Attribution Preservation in Network Compression for Reliable Network Interpretation
Authors: Geondo Park, June Yong Yang, Sung Ju Hwang, Eunho Yang
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the effectiveness of our algorithm both quantitatively and qualitatively on diverse compression methods. |
| Researcher Affiliation | Collaboration | KAIST and AITRICS, South Korea |
| Pseudocode | No | The paper presents its framework and methods using mathematical formulations and descriptive text, but it does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any concrete access to source code for the methodology described, such as a repository link or a statement about code being available in supplementary materials. |
| Open Datasets | Yes | We utilize the held-out 1,449 images with segmentation masks in the PASCAL VOC 2012 dataset. [28] |
| Dataset Splits | No | The paper mentions training on the PASCAL VOC 2012 dataset but does not specify explicit training/validation splits or percentages required to reproduce the experiment setup. |
| Hardware Specification | No | The paper does not provide any specific hardware details (such as GPU/CPU models, memory, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies, such as library names with version numbers, needed to replicate the experiments. |
| Experiment Setup | Yes | For unstructured pruning... We use pruning rate ρw = 0.2. After pruning is complete, the remaining sparse network is fine-tuned for 30 epochs on the same dataset. The whole process is then iterated 16 times to produce the final compressed network with pruning rate ρ = 0.97. ... For structured pruning, we use the ℓ1 structured pruning proposed in [2], in which whole filters are pruned according to the magnitude of each filter's ℓ1 norm. ... We use channel pruning rate ρc = 0.7. (See the sketch after this table.) |
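The pruning schedule quoted above is standard iterative magnitude pruning, so it can be sketched with PyTorch's built-in `torch.nn.utils.prune` utilities. This is a minimal illustration under assumptions, not the authors' implementation (no code is released): `fine_tune` is a hypothetical placeholder for the 30-epoch fine-tuning loop, and restricting pruning to `nn.Conv2d` layers is an assumption. Note the rates are consistent: removing 20% of the remaining weights per round for 16 rounds gives a cumulative sparsity of 1 − (1 − 0.2)¹⁶ ≈ 0.97, matching the reported final rate ρ = 0.97.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

def iterative_magnitude_pruning(model, fine_tune, rho_w=0.2, iterations=16):
    """Iterative unstructured pruning as described in the setup above.

    Each round removes the smallest-magnitude 20% of the *remaining*
    weights, then fine-tunes for 30 epochs; after 16 rounds the
    cumulative sparsity is 1 - (1 - 0.2)**16 ~= 0.97.
    """
    conv_layers = [m for m in model.modules() if isinstance(m, nn.Conv2d)]
    for _ in range(iterations):
        for layer in conv_layers:
            # PyTorch applies `amount` to the still-unpruned weights,
            # so repeated calls compound multiplicatively.
            prune.l1_unstructured(layer, name="weight", amount=rho_w)
        fine_tune(model, epochs=30)  # hypothetical 30-epoch recovery loop
    return model

def l1_structured_pruning(model, rho_c=0.7):
    """One-shot l1 structured (filter) pruning in the spirit of [2]."""
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            # dim=0 prunes whole output filters; n=1 ranks them by l1 norm.
            prune.ln_structured(module, name="weight", amount=rho_c, n=1, dim=0)
    return model
```

A design note on the sketch: `prune.ln_structured` with `dim=0` zeroes entire output filters, which corresponds to the channel pruning rate ρc = 0.7 in the setup, whereas `prune.l1_unstructured` zeroes individual weights, matching the unstructured rate ρw = 0.2 per iteration.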