Pruning Filters for Efficient ConvNets
Authors: Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, Hans Peter Graf
ICLR 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show that even simple filter pruning techniques can reduce inference costs for VGG-16 by up to 34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the original accuracy by retraining the networks. |
| Researcher Affiliation | Collaboration | Hao Li University of Maryland haoli@cs.umd.edu; Asim Kadav NEC Labs America asim@nec-labs.com; Igor Durdanovic NEC Labs America igord@nec-labs.com; Hanan Samet University of Maryland hjs@cs.umd.edu; Hans Peter Graf NEC Labs America hpg@nec-labs.com |
| Pseudocode | No | The paper describes the filter-pruning procedure as numbered prose steps but does not format it as pseudocode or an algorithm block (a minimal sketch of the described pruning step appears after this table). |
| Open Source Code | No | The paper states 'We implement our filter pruning method in Torch7 (Collobert et al. (2011))' but does not provide a link or explicit statement about releasing their own source code for the described methodology. |
| Open Datasets | Yes | We prune two types of networks: simple CNNs (VGG-16 on CIFAR-10) and Residual networks (ResNet-56/110 on CIFAR-10 and ResNet-34 on ImageNet). |
| Dataset Splits | Yes | Table 1: Overall results. The best test/validation accuracy during the retraining process is reported. ... The evaluation is conducted in Torch7 with Titan X (Pascal) GPU and cuDNN v5.1, using a mini-batch size 128. As shown in Table 3, the saved inference time is close to the FLOP reduction. Note that the FLOP number only considers the operations in the Conv and FC layers, while some calculations such as Batch Normalization and other overheads are not accounted. |
| Hardware Specification | Yes | The evaluation is conducted in Torch7 with Titan X (Pascal) GPU and cuDNN v5.1, using a mini-batch size 128. |
| Software Dependencies | Yes | We implement our filter pruning method in Torch7 (Collobert et al. (2011)). The evaluation is conducted in Torch7 with Titan X (Pascal) GPU and cuDNN v5.1. |
| Experiment Setup | Yes | For retraining, we use a constant learning rate 0.001 and retrain 40 epochs for CIFAR-10 and 20 epochs for ImageNet, which represents one-fourth of the original training epochs. ... using a mini-batch size 128. (A hedged retraining-setup sketch follows the table.) |
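The paper gives its pruning procedure only as numbered prose steps, and no source code is released. The following is a minimal sketch, not the authors' Torch7 implementation, of the L1-norm filter ranking and removal the paper describes, written here in PyTorch; the function name, the `keep_ratio` default, and the use of `nn.Conv2d` are illustrative assumptions.

```python
# Sketch of L1-norm filter pruning for a single conv layer (not the authors' code).
import torch
import torch.nn as nn

def prune_conv_filters(conv: nn.Conv2d, keep_ratio: float = 0.66) -> nn.Conv2d:
    """Keep the filters of `conv` with the largest L1 norms (sum of absolute kernel weights)."""
    n_keep = max(1, int(conv.out_channels * keep_ratio))
    # L1 norm of each filter: sum of |w| over (in_channels, kH, kW).
    l1 = conv.weight.detach().abs().sum(dim=(1, 2, 3))
    keep_idx = torch.argsort(l1, descending=True)[:n_keep]

    pruned = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                       stride=conv.stride, padding=conv.padding,
                       bias=conv.bias is not None)
    pruned.weight.data = conv.weight.data[keep_idx].clone()
    if conv.bias is not None:
        pruned.bias.data = conv.bias.data[keep_idx].clone()
    return pruned
```

A full reproduction would also remove the corresponding input channels of the following layer (and any BatchNorm parameters) before retraining, as the paper's numbered steps describe.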
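The experiment-setup quote fixes only the constant learning rate (0.001), the epoch budget (40 for CIFAR-10, 20 for ImageNet), and the mini-batch size (128). The sketch below fills in the remaining pieces with common defaults; the SGD optimizer with momentum 0.9 and the plain `ToTensor` transform are assumptions not stated in the excerpts above.

```python
# Sketch of the CIFAR-10 retraining loop with the paper's reported hyperparameters.
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as T

def retrain_cifar10(model, epochs=40, lr=0.001, batch_size=128, device="cuda"):
    train_set = torchvision.datasets.CIFAR10(
        root="./data", train=True, download=True, transform=T.ToTensor())
    loader = torch.utils.data.DataLoader(
        train_set, batch_size=batch_size, shuffle=True, num_workers=2)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)  # optimizer choice assumed
    criterion = nn.CrossEntropyLoss()
    model.to(device).train()
    for epoch in range(epochs):  # constant learning rate, no decay schedule
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model
```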