SA-BNN: State-Aware Binary Neural Network

Authors: Chunlei Liu, Peng Chen, Bohan Zhuang, Chunhua Shen, Baochang Zhang, Wenrui Ding

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments on ImageNet show that the proposed SA-BNN outperforms the current state-of-the-art (e.g., Bi-Real Net) by more than 3% when using a ResNet architecture. Specifically, we achieve 61.7%, 65.5% and 68.7% Top-1 accuracy with ResNet-18, ResNet-34 and ResNet-50 on ImageNet, respectively."
Researcher Affiliation | Academia | Chunlei Liu (1,2), Peng Chen (2), Bohan Zhuang (3), Chunhua Shen (2), Baochang Zhang (1), Wenrui Ding (1); 1: Beihang University, 2: The University of Adelaide, 3: Monash University
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any concrete statement about releasing open-source code for the described methodology, nor does it include a link to a code repository.
Open Datasets | Yes | "We perform experiments on large-scale dataset ImageNet (ILSVRC12) (Russakovsky et al. 2015)"
Dataset Splits | Yes | "We perform experiments on large-scale dataset ImageNet (ILSVRC12) (Russakovsky et al. 2015), which contains approximately 1.2 million training images and 50K validation images from 1000 categories." (See the data-loading sketch after the table.)
Hardware Specification | No | The paper does not specify the hardware used for running the experiments (e.g., specific GPU models, CPU types, or cloud computing instances with specifications).
Software Dependencies | No | The paper mentions using "Adam (Kingma and Ba 2014)" as an optimizer but does not provide version numbers for any software dependencies such as programming languages, libraries, or frameworks (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup | Yes | "We use Adam (Kingma and Ba 2014) with the momentum of 0.9 and set the weight-decay to be 0. For SA-BNNs with backbone ResNet-18, we run the training algorithm for 95 epochs with a batch size of 256. The learning rate starts from 0.001 and is decayed twice by multiplying 0.1 at the 75th and the 85th epoch. Besides, for SA-BNNs with backbone ResNet-34, the training process includes 90 epochs and the batch size is set to 256. The learning rate starts from 0.001 and is multiplied by 0.1 at the 60th and the 80th epoch, respectively. Moreover, for SA-BNNs with backbone ResNet-50, we run the training algorithm for 70 epochs with a batch size of 64. The learning rate starts from 0.0005 and is decayed twice by multiplying 0.1 at the 40th and the 60th epoch." (See the training-schedule sketch after the table.)
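For concreteness, here is a minimal sketch of loading the standard ILSVRC12 split that the Dataset Splits row refers to (approximately 1.28 million training images, 50K validation images, 1000 classes). The paper itself does not describe a loading pipeline; the torchvision-based code and the ./imagenet path below are assumptions.

# A hedged sketch of the standard ILSVRC12 split, assuming torchvision
# and locally downloaded ImageNet archives under ./imagenet.
import torchvision.datasets as datasets
import torchvision.transforms as transforms

transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

train_set = datasets.ImageNet(root="./imagenet", split="train", transform=transform)
val_set = datasets.ImageNet(root="./imagenet", split="val", transform=transform)
print(len(train_set), len(val_set))  # approx. 1,281,167 and 50,000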
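The Experiment Setup row maps directly onto a step-decay training schedule. Below is a minimal sketch in PyTorch (the paper names no framework, so this choice is an assumption); the SCHEDULES dict, the make_optimizer helper, and the mapping of the reported "momentum of 0.9" to Adam's first-moment coefficient beta1 are illustrative, while all hyperparameter values are taken from the quoted text.

# A minimal sketch of the reported per-backbone training schedules,
# assuming PyTorch. Only the hyperparameter values come from the paper.
import torch
from torch.optim import Adam
from torch.optim.lr_scheduler import MultiStepLR

# (initial LR, milestone epochs, total epochs, batch size) per backbone.
SCHEDULES = {
    "resnet18": dict(lr=1e-3, milestones=[75, 85], epochs=95, batch_size=256),
    "resnet34": dict(lr=1e-3, milestones=[60, 80], epochs=90, batch_size=256),
    "resnet50": dict(lr=5e-4, milestones=[40, 60], epochs=70, batch_size=64),
}

def make_optimizer(model: torch.nn.Module, backbone: str):
    cfg = SCHEDULES[backbone]
    # "Momentum of 0.9" is read here as Adam's beta1 (an assumption);
    # weight decay is 0 as stated in the paper.
    optimizer = Adam(model.parameters(), lr=cfg["lr"],
                     betas=(0.9, 0.999), weight_decay=0.0)
    # The learning rate is multiplied by 0.1 at the reported milestones.
    scheduler = MultiStepLR(optimizer, milestones=cfg["milestones"], gamma=0.1)
    return optimizer, scheduler, cfg["epochs"], cfg["batch_size"]

Calling scheduler.step() once per epoch reproduces the quoted decays, e.g. for ResNet-18 the rate drops from 0.001 to 0.0001 at epoch 75 and to 0.00001 at epoch 85.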