Binarized Neural Networks
Authors: Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, Yoshua Bengio
NeurIPS 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To validate the effectiveness of BNNs, we conducted two sets of experiments on the Torch7 and Theano frameworks. |
| Researcher Affiliation | Academia | (1) Technion, Israel Institute of Technology. (2) Université de Montréal. (3) Columbia University. (4) CIFAR Senior Fellow. |
| Pseudocode | Yes | Algorithm 1: Training a BNN. |
| Open Source Code | Yes | The code for training and running our BNNs is available on-line (both Theano and Torch frameworks). |
| Open Datasets | Yes | On both, BNNs achieved nearly state-of-the-art results over the MNIST, CIFAR-10 and SVHN datasets. |
| Dataset Splits | Yes | Figure 1: Training curves for different methods on CIFAR-10 dataset. The dotted lines represent the training costs (square hinge losses) and the continuous lines the corresponding validation error rates. |
| Hardware Specification | Yes | The first three columns represent the time it takes to perform an 8192 × 8192 × 8192 (binary) matrix multiplication on a GTX750 Nvidia GPU, depending on which kernel is used. |
| Software Dependencies | Yes | Torch7: A matlab-like environment for machine learning. |
| Experiment Setup | No | Implementation details are reported in Appendix A and code for both frameworks is available online. However, such specific details (e.g., hyperparameters like the learning rate and batch size) are not provided in the main body of the paper. |
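The two core ideas the table refers to, deterministic binarization with a straight-through gradient estimator (Algorithm 1) and binary matrix multiplication via XNOR and popcount (the GPU kernel timings), can be sketched as follows. This is a minimal illustration, not the authors' released Theano/Torch code; the function names and NumPy formulation are our own.

```python
import numpy as np

def binarize(x):
    # Deterministic binarization from the paper: sign(x),
    # with the convention sign(0) = +1.
    return np.where(x >= 0, 1.0, -1.0)

def binary_dense_forward(x, real_weights):
    # Forward pass of one binarized layer: the weights are binarized
    # on the fly, while real-valued weights are kept for the update.
    return x @ binarize(real_weights)

def straight_through_grad(grad_out, pre_activation):
    # Straight-through estimator: pass the gradient through sign()
    # unchanged, but cancel it where |pre_activation| > 1 (saturation).
    return grad_out * (np.abs(pre_activation) <= 1.0)

def xnor_popcount_dot(a_bits, b_bits, n):
    # Dot product of two {-1,+1} vectors of length n, each packed as a
    # bitmask (bit i set means +1). Matching bits (XNOR) contribute +1,
    # mismatches -1, so dot = 2 * popcount(XNOR(a, b)) - n.
    xnor = ~(a_bits ^ b_bits) & ((1 << n) - 1)
    return 2 * bin(xnor).count("1") - n
```

For example, the vectors (+1, -1, +1) and (+1, +1, -1) pack to the bitmasks `0b101` and `0b011`, and `xnor_popcount_dot(0b101, 0b011, 3)` returns -1, matching the real-valued dot product. Replacing multiply-accumulate with XNOR-popcount is what makes the paper's binary GPU kernel faster than the baseline.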