Rotated Binary Neural Network

Authors: Mingbao Lin, Rongrong Ji, Zihan Xu, Baochang Zhang, Yan Wang, Yongjian Wu, Feiyue Huang, Chia-Wen Lin

NeurIPS 2020

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Experiments on CIFAR-10 and ImageNet demonstrate the superiorities of RBNN over many state-of-the-arts. Our source code, experimental settings, training logs and binary models are available at https://github.com/lmbxmu/RBNN. |
| Researcher Affiliation | Collaboration | 1. Media Analytics and Computing Lab, Department of Artificial Intelligence, School of Informatics, Xiamen University; 2. Institute of Artificial Intelligence, Xiamen University; 3. Peng Cheng Lab; 4. Beihang University; 5. Pinterest; 6. Tencent Youtu Lab; 7. National Tsing Hua University |
| Pseudocode | No | The paper describes its algorithm and optimization steps in narrative text and mathematical formulations but does not include a formal pseudocode block or algorithm section. (A generic binarization sketch is given after this table.) |
| Open Source Code | Yes | Our source code, experimental settings, training logs and binary models are available at https://github.com/lmbxmu/RBNN. |
| Open Datasets | Yes | In this section, we evaluate our RBNN on CIFAR-10 [25] using ResNet-18/20 [19] and VGG-small [44], and on ImageNet [11] using ResNet-18/34 [19]. |
| Dataset Splits | No | The paper evaluates on CIFAR-10 and ImageNet, which have standard splits, but it does not explicitly state the train/validation/test split percentages or sample counts used in the experiments. (The standard CIFAR-10 split is sketched below.) |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments (e.g., GPU model, CPU type). |
| Software Dependencies | No | The paper states, "We implement RBNN with Pytorch and the SGD is adopted as the optimizer," but it does not specify version numbers for PyTorch or any other software dependencies. (A version-logging snippet follows the table.) |
| Experiment Setup | Yes | Following the compared methods, all convolutional and fully-connected layers except the first and last ones are binarized. We implement RBNN with Pytorch and the SGD is adopted as the optimizer. Also, for fair comparison, we only apply the classification loss during training. ... Tmin = 2, Tmax = 1 in our implementation, E is the number of training epochs and e represents the current epoch. (A setup sketch follows this table.) |
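
Since the paper provides no pseudocode, the following is a minimal, generic sketch of weight binarization with a straight-through estimator (STE) in PyTorch, included purely for orientation. It is not the authors' method: RBNN's learned rotation and training-aware gradient approximation are omitted, and every name here is illustrative.

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Generic sign binarization with a straight-through estimator.
    Illustrative sketch only -- NOT the RBNN rotation scheme."""

    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        # Pass gradients through only where |w| <= 1 (standard STE clipping).
        return grad_out * (w.abs() <= 1).to(grad_out.dtype)

binarize = BinarizeSTE.apply
w_b = binarize(torch.randn(4, 4, requires_grad=True))  # example usage
```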
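
The split sizes are not stated in the paper, but CIFAR-10's canonical split is 50,000 training and 10,000 test images; a reproduction would presumably load it via torchvision (an assumption, since the authors' data pipeline is not quoted here):

```python
import torchvision
import torchvision.transforms as T

transform = T.ToTensor()  # placeholder; the paper's augmentation is not specified here
train_set = torchvision.datasets.CIFAR10(root="./data", train=True, download=True, transform=transform)
test_set = torchvision.datasets.CIFAR10(root="./data", train=False, download=True, transform=transform)
print(len(train_set), len(test_set))  # 50000 10000 -- the standard CIFAR-10 split
```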
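
Because no version numbers are reported, anyone reproducing the results must pin and record their own environment; a minimal way to log the relevant versions:

```python
import sys
import torch

# Record the interpreter and framework versions used for a reproduction run.
print(f"python {sys.version.split()[0]}")
print(f"torch  {torch.__version__} (CUDA {torch.version.cuda})")
```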
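
As a rough translation of the quoted setup into PyTorch, the sketch below binarizes all convolutional/fully-connected layers except the first and last, uses SGD, and applies only the classification (cross-entropy) loss. The toy model, epoch count, and all optimizer hyperparameters are placeholders, not values reported in the paper.

```python
import torch
import torch.nn as nn

# Toy stand-in model; a real run would use ResNet-18/20 or VGG-small per the paper.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 10),
)

# Per the paper, every conv/FC layer except the first and last is binarized.
conv_fc = [m for m in model.modules() if isinstance(m, (nn.Conv2d, nn.Linear))]
to_binarize = conv_fc[1:-1]  # here: only the middle Conv2d

criterion = nn.CrossEntropyLoss()  # "only ... the classification loss during training"
# SGD as stated in the paper; lr/momentum/weight decay are placeholders.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=1e-4)

E = 300  # placeholder epoch count; the paper's E is not quoted here
for e in range(E):
    frac = e / E  # drives the quoted temperature schedule between Tmin and Tmax
    ...  # per-epoch training loop omitted
```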