Espresso: Efficient Forward Propagation for Binary Deep Neural Networks

Authors: Fabrizio Pedersoli, George Tzanetakis, Andrea Tagliasacchi

ICLR 2018

Reproducibility Variable | Result | LLM Response
--- | --- | ---
Research Type | Experimental | "We experimentally show that Espresso is significantly faster than existing implementations of optimized binary neural networks (~2 orders of magnitude)." (See the bit-packed kernel sketch after the table.)
Researcher Affiliation | Academia | Fabrizio Pedersoli (University of Victoria, fpeder@uvic.ca); George Tzanetakis (University of Victoria, gtzan@uvic.ca); Andrea Tagliasacchi (University of Victoria, ataiya@uvic.ca)
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | "Espresso is released under the Apache 2.0 license and is available at http://github.com/fpeder/espresso."
Open Datasets | Yes | "The MNIST dataset (LeCun et al., 1998) consists of 60K instances for training and 10K instances for testing. The CIFAR-10 dataset (Krizhevsky et al., 2009) consists of 50K training instances and 10K testing instances of 32×32×3 color images."
Dataset Splits | No | The paper specifies training and testing instances for both datasets but does not explicitly mention a validation set or split.
Hardware Specification | Yes | "The execution times, averaged over 100 experiments, are obtained on a machine equipped with an NVIDIA GeForce GTX 960 with 2 GB of RAM, and an Intel dual-Xeon X5660 @ 2.80 GHz."
Software Dependencies | No | The paper mentions using the "OpenBLAS library (Xianyi et al.)" but does not specify a version number for it or any other software dependency. (A version-recording sketch follows the table.)
Experiment Setup | Yes | "In CPU mode, we configure the OpenBLAS library for matrix multiplication to use all the 24 available cores. Since our interest is to assess the real-time performance of binary optimized DNNs, in those experiments we use a batch size of one, and measure the averaged forward time for each image of the testing sets for each dataset." (A timing-loop sketch follows the table.)
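
On the headline speed-up in the Research Type row: binary-DNN libraries obtain their gains by replacing floating-point multiply-accumulates with bitwise operations on bit-packed tensors. The C sketch below shows the standard XOR/popcount dot-product trick such kernels build on; it is an illustration of the general technique, not Espresso's actual code. The `binary_dot` name is this sketch's own, and `__builtin_popcountll` assumes GCC or Clang.

```c
#include <stdint.h>
#include <stdio.h>

/* Dot product of two {-1,+1} vectors bit-packed into 64-bit words
 * (+1 encoded as bit 1, -1 as bit 0). For n_bits valid entries:
 *   dot(a, b) = n_bits - 2 * popcount(a XOR b)
 * since XOR marks exactly the positions whose product is -1. */
int binary_dot(const uint64_t *a, const uint64_t *b,
               int n_words, int n_bits)
{
    int hamming = 0;
    for (int i = 0; i < n_words; ++i)
        hamming += __builtin_popcountll(a[i] ^ b[i]);
    return n_bits - 2 * hamming;
}

int main(void)
{
    /* Two 64-bit words per vector = 128 {-1,+1} entries each. */
    uint64_t a[2] = { 0xF0F0F0F0F0F0F0F0ULL, 0ULL };
    uint64_t b[2] = { 0xFFFFFFFFFFFFFFFFULL, 0ULL };
    printf("dot = %d\n", binary_dot(a, b, 2, 128)); /* prints -64 */
    return 0;
}
```

One 64-bit XOR plus one popcount stands in for 64 multiply-accumulates, which is the kind of arithmetic saving behind speed-ups of this magnitude.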
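
On the Software Dependencies finding: even when a paper omits version numbers, a reproduction can record the OpenBLAS build actually linked at run time. Assuming a reasonably recent OpenBLAS, the library exports openblas_get_config() and openblas_get_num_threads() (declared in cblas.h); the sketch below simply prints both.

```c
#include <stdio.h>

/* Real OpenBLAS entry points, redeclared here so the sketch stands
 * alone without the cblas.h header. */
extern char *openblas_get_config(void);
extern int   openblas_get_num_threads(void);

int main(void)
{
    /* Prints something like "OpenBLAS 0.3.21 DYNAMIC_ARCH ...". */
    printf("OpenBLAS config:  %s\n", openblas_get_config());
    printf("OpenBLAS threads: %d\n", openblas_get_num_threads());
    return 0;
}
```

Link with -lopenblas; logging this string next to each result would close the version gap the review flags.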
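
On the Experiment Setup row: the quoted protocol is single-image (batch size one) forward timing, averaged per image over the test set, with OpenBLAS using all 24 cores. A minimal sketch of that loop, assuming a hypothetical forward_pass() standing in for the network under test (openblas_set_num_threads() is OpenBLAS's real thread-control call; exporting OPENBLAS_NUM_THREADS=24 achieves the same):

```c
#include <time.h>

/* Real OpenBLAS entry point for pinning the GEMM thread count. */
extern void openblas_set_num_threads(int n);

/* Hypothetical hook standing in for the network's forward pass;
 * Espresso's actual API differs. */
extern void forward_pass(const float *image);

/* Average single-image (batch size 1) forward time in milliseconds
 * over n_images test images, mirroring the quoted protocol. */
double avg_forward_ms(const float **images, int n_images)
{
    struct timespec t0, t1;
    double total_ms = 0.0;

    openblas_set_num_threads(24); /* all 24 cores, as in the paper */

    for (int i = 0; i < n_images; ++i) {
        clock_gettime(CLOCK_MONOTONIC, &t0);
        forward_pass(images[i]);
        clock_gettime(CLOCK_MONOTONIC, &t1);
        total_ms += (t1.tv_sec - t0.tv_sec) * 1e3
                  + (t1.tv_nsec - t0.tv_nsec) / 1e6;
    }
    return total_ms / n_images;
}
```

Timing each image separately, rather than one large batch, is what makes the measurement a real-time latency figure rather than a throughput figure, matching the paper's stated interest in real-time performance.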