Structured Bayesian Pruning via Log-Normal Multiplicative Noise
Authors: Kirill Neklyudov, Dmitry Molchanov, Arsenii Ashukha, Dmitry P. Vetrov
NeurIPS 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments show that our model leads to high group sparsity level and significant acceleration of convolutional neural networks with negligible accuracy drop. We demonstrate the performance of our method on LeNet and VGG-like architectures using MNIST and CIFAR-10 datasets. |
| Researcher Affiliation | Collaboration | 1National Research University Higher School of Economics 2Yandex 3Skolkovo Institute of Science and Technology |
| Pseudocode | No | The paper describes methods and procedures in detail but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | The source code is available in Theano [7] and Lasagne, and also in TensorFlow [1] (https://github.com/necludov/group-sparsity-sbp). |
| Open Datasets | Yes | We demonstrate the performance of our method on LeNet and VGG-like architectures using MNIST and CIFAR-10 datasets. |
| Dataset Splits | No | The paper mentions using MNIST and CIFAR-10 datasets for training and evaluation, but it does not specify explicit train/validation/test dataset splits (e.g., percentages, sample counts, or citations to predefined splits). |
| Hardware Specification | Yes | We report acceleration that was computed on CPU (Intel Xeon E5-2630) and GPU (Tesla K40). |
| Software Dependencies | No | The paper mentions using 'Theano' and 'TensorFlow' and provides general references, but it does not specify exact version numbers for these or any other software dependencies needed for reproducibility. |
| Experiment Setup | Yes | The truncation parameters a and b are the hyperparameters of our model... We use values a = −20 and b = 0 in all experiments. (An illustrative sketch of this setup follows the table.) |
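
To make the quoted experiment setup concrete, below is a minimal, hypothetical Python/NumPy sketch of structured multiplicative log-normal noise with the reported truncation bounds a = −20, b = 0 on log(θ). It is not the authors' Theano/Lasagne or TensorFlow implementation; the per-channel shapes, the SNR-based pruning threshold, and the use of untruncated log-normal moments at test time are simplifying assumptions made here for illustration only.

```python
import numpy as np

# Illustrative sketch (not the authors' code): multiplicative log-normal noise
# applied per output channel. The paper's truncated log-uniform prior has
# support [A, B] in the log domain; A = -20, B = 0 as reported in the setup.
A, B = -20.0, 0.0

rng = np.random.default_rng(0)

# Variational parameters of the log-normal posterior q(theta_i) = LogN(mu_i, sigma_i^2),
# one noise variable per output channel (hypothetical sizes for illustration).
n_channels = 8
mu = np.zeros(n_channels)               # posterior mean of log(theta)
log_sigma = np.full(n_channels, -2.0)   # posterior log-std of log(theta)

def sample_noise(mu, log_sigma, rng):
    """Reparameterized sample: theta = exp(mu + sigma * eps), eps ~ N(0, 1)."""
    eps = rng.standard_normal(mu.shape)
    return np.exp(mu + np.exp(log_sigma) * eps)

def apply_sbp(x, mu, log_sigma, rng, training=True, snr_threshold=1.0):
    """Multiply activations x of shape (batch, channels) by structured noise.

    At test time, channels whose signal-to-noise ratio falls below the
    (assumed) threshold are zeroed out, and the rest are scaled by E[theta].
    """
    sigma = np.exp(log_sigma)
    if training:
        return x * sample_noise(mu, log_sigma, rng)
    # Moments of the untruncated log-normal, used here for simplicity.
    mean = np.exp(mu + 0.5 * sigma**2)
    var = (np.exp(sigma**2) - 1.0) * np.exp(2.0 * mu + sigma**2)
    snr = mean / np.sqrt(var)
    mask = (snr > snr_threshold).astype(x.dtype)
    return x * (mean * mask)

# Usage: a toy batch of activations.
x = rng.standard_normal((4, n_channels))
y_train = apply_sbp(x, mu, log_sigma, rng, training=True)
y_test = apply_sbp(x, mu, log_sigma, rng, training=False)
```

In this sketch, pruning a channel corresponds to its noise variable having low signal-to-noise ratio, so the test-time mask removes that channel entirely; in the paper this structure is what yields group sparsity and the reported acceleration.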