Incremental Network Quantization: Towards Lossless CNNs with Low-precision Weights

Authors: Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen

Venue: ICLR 2017

Reproducibility Assessment

Each entry below gives a reproducibility variable, the assessed result, and the LLM response quoting or summarizing the paper's evidence.

Research Type: Experimental
LLM Response: "Extensive experiments on the ImageNet classification task using almost all known deep CNN architectures including AlexNet, VGG-16, GoogleNet and ResNets well testify the efficacy of the proposed method."

Researcher Affiliation: Industry
LLM Response: "Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, and Yurong Chen, Intel Labs China, {aojun.zhou, anbang.yao, yiwen.guo, lin.x.xu, yurong.chen}@intel.com"

Pseudocode: Yes
LLM Response: "Algorithm 1: Incremental network quantization for lossless CNNs with low-precision weights." (A hedged sketch of this procedure is given after the table.)

Open Source Code: No
LLM Response: "The code will be made publicly available."

Open Datasets: Yes
LLM Response: "ImageNet dataset has about 1.2 million training images and 50 thousand validation images. Each image is annotated as one of 1000 object classes."

Dataset Splits: Yes
LLM Response: "ImageNet dataset has about 1.2 million training images and 50 thousand validation images."

Hardware Specification: No
LLM Response: The paper does not provide specific details about the hardware (e.g., GPU models, CPU types) used for running the experiments.

Software Dependencies: No
LLM Response: The paper mentions software like Caffe and Torch ("Since our method is implemented with Caffe, we make use of an open source tool to convert the pre-trained ResNet-18 model from Torch to Caffe.") but does not specify version numbers for these or other dependencies.

Experiment Setup: Yes
LLM Response: "AlexNet: AlexNet has 5 convolutional layers and 3 fully-connected layers. We set the accumulated portions of quantized weights at iterative steps as {0.3, 0.6, 0.8, 1}, the batch size as 256, the weight decay as 0.0005, and the momentum as 0.9." (These values are restated as a configuration sketch after the table.)

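The Pseudocode entry refers to the paper's Algorithm 1, which repeats three operations until every weight is quantized: partition each layer's weights into two disjoint groups, quantize one group to powers of two (or zero), and retrain the remaining floating-point group to compensate for the accuracy loss. The NumPy sketch below illustrates that loop under stated assumptions: the function names are hypothetical, the candidate set {0} ∪ {±2^n : n2 ≤ n ≤ n1} and the magnitude-based (pruning-inspired) partition follow the paper's description, the bounds n1 and n2 (which the paper derives from the desired bit-width and the layer's maximum weight magnitude) are left as plain parameters, and `retrain` stands in for SGD fine-tuning. It is an illustration, not the authors' unreleased implementation.

```python
# Illustrative INQ sketch; names and details are assumptions, not the
# authors' code. Quantizes weights group by group to {0} U {+/-2^n}.
import numpy as np

def power_of_two_levels(n1, n2):
    """Candidate set {0} U {+/-2^n : n2 <= n <= n1}, with n2 <= n1."""
    mags = [2.0 ** n for n in range(n2, n1 + 1)]
    return np.array([0.0] + mags + [-m for m in mags])

def quantize_to_levels(w, levels):
    """Snap each weight to its nearest candidate value."""
    idx = np.argmin(np.abs(w[..., None] - levels), axis=-1)
    return levels[idx]

def magnitude_partition(w, portion):
    """Boolean mask selecting the largest-magnitude `portion` of weights
    (the pruning-inspired partition strategy described in the paper)."""
    k = max(int(round(portion * w.size)), 1)
    thresh = np.sort(np.abs(w).ravel())[::-1][k - 1]
    return np.abs(w) >= thresh

def inq_quantize(w, portions=(0.3, 0.6, 0.8, 1.0), n1=-1, n2=-7, retrain=None):
    """One layer's INQ loop: at each step, quantize and freeze an
    accumulated portion of weights, then retrain the float remainder."""
    levels = power_of_two_levels(n1, n2)
    frozen = np.zeros(w.shape, dtype=bool)
    for p in portions:
        newly = magnitude_partition(w, p) & ~frozen
        w[newly] = quantize_to_levels(w[newly], levels)
        frozen |= newly
        if retrain is not None:
            # Stand-in for SGD fine-tuning; only ~frozen weights may move.
            w = retrain(w, trainable=~frozen)
    return w
```

With the accumulated portions {0.3, 0.6, 0.8, 1} quoted above, each step quantizes and freezes an additional slice of the largest-magnitude weights, so after the final step the whole layer holds only power-of-two or zero values.
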
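The Experiment Setup quote for AlexNet can likewise be restated as a small configuration block. The key names are illustrative; the values are copied verbatim from the quote, and no learning-rate schedule is included since the quote does not give one.

```python
# AlexNet INQ setup restated from the quoted Experiment Setup entry;
# key names are illustrative, values are verbatim from the paper's quote.
ALEXNET_INQ_SETUP = {
    "accumulated_portions": (0.3, 0.6, 0.8, 1.0),  # quantized fraction after each step
    "batch_size": 256,
    "weight_decay": 0.0005,
    "momentum": 0.9,
}
```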