Compressing Neural Networks using the Variational Information Bottleneck

Authors: Bin Dai, Chen Zhu, Baining Guo, David Wipf

ICML 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this paper we focus on pruning individual neurons... We demonstrate state-of-the-art compression rates across an array of datasets and network architectures.
Researcher Affiliation | Collaboration | (1) Institute for Advanced Study, Tsinghua University, Beijing, China; (2) Department of Computer Science, University of Maryland, USA; (3) Microsoft Research, Beijing, China.
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks clearly labeled as such. Figure 1 shows a VIBNet structure diagram, but it is not pseudocode.
Open Source Code | Yes | Code available at https://github.com/zhuchen03/VIBNet.
Open Datasets | Yes | MNIST (LeCun, 1998)... CIFAR10 and CIFAR100 (Krizhevsky & Hinton, 2009).
Dataset Splits | No | The paper mentions 'test sets' but does not explicitly provide details about a validation set or how data was split for validation purposes (e.g., percentages, sample counts, or citations to predefined validation splits).
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies | No | The paper does not list ancillary software dependencies (e.g., library names with version numbers such as a TensorFlow or PyTorch version, or specific Python packages) needed to replicate the experiments.
Experiment Setup | Yes | During training, we set the tradeoff parameter γ using a simple heuristic such that VIBNet can roughly match the best previously reported accuracy values. In doing so we obtain a meaningful calibration of the corresponding compression results. See (Dai et al., 2018) for full details regarding this and other aspects of our model training set-up.
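
The γ tradeoff described in the Experiment Setup row is central to the method summarized in the Research Type row: each hidden layer is followed by a stochastic multiplicative gate, and a per-neuron information penalty on the gates is weighed against the task loss by γ. The sketch below is a minimal illustration of that idea, assuming PyTorch; the VIBGate class, its parameterization, and the gamma value are illustrative assumptions rather than the authors' implementation (see the VIBNet repository for the actual code).

# Minimal sketch (not the authors' code) of a VIB-style multiplicative gate
# layer and a gamma-weighted training loss, assuming PyTorch. Parameter names,
# initializations, and the gamma value are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VIBGate(nn.Module):
    """Elementwise stochastic gate z ~ N(mu, sigma^2) applied to a hidden layer."""
    def __init__(self, dim):
        super().__init__()
        self.mu = nn.Parameter(torch.ones(dim))                # gate means
        self.log_sigma = nn.Parameter(-3.0 * torch.ones(dim))  # gate log-std

    def forward(self, h):
        if self.training:
            eps = torch.randn_like(h)
            return h * (self.mu + self.log_sigma.exp() * eps)  # reparameterized sample
        return h * self.mu                                      # deterministic at test time

    def info_penalty(self):
        # Per-neuron penalty log(1 + mu^2 / sigma^2); gates driven toward zero
        # mark neurons that can be pruned after training.
        alpha = self.mu.pow(2) / self.log_sigma.exp().pow(2)
        return torch.log1p(alpha).sum()

def vib_loss(logits, targets, gates, gamma=1e-5):
    # Task loss plus gamma-weighted sum of the gates' information penalties;
    # gamma controls the accuracy/compression tradeoff mentioned in the paper.
    return F.cross_entropy(logits, targets) + gamma * sum(g.info_penalty() for g in gates)

In this sketch, neurons whose mu^2/sigma^2 ratio falls below a chosen threshold after training would be removed; the paper derives its exact penalty and pruning rule from a variational bound, so the details may differ from the simplification shown here.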