Adaptive Quantization for Deep Neural Network
Authors: Yiren Zhou, Seyed-Mohsen Moosavi-Dezfooli, Ngai-Man Cheung, Pascal Frossard
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section we show empirical results that validate our assumptions in previous sections, and evaluate the proposed bit-width optimization approach. All codes are implemented using MatConvNet (Vedaldi and Lenc 2015). All experiments are conducted using a Dell workstation with E5-2630 CPU and Titan X Pascal GPU. Fig. 6 shows the quantization results using our method, SQNR-based method (Lin, Talathi, and Annapureddy 2016), and equal bit-width quantization. |
| Researcher Affiliation | Academia | ¹Singapore University of Technology and Design (SUTD), ²École Polytechnique Fédérale de Lausanne (EPFL) |
| Pseudocode | No | The paper mentions 'The detailed algorithm about the above procedure can be found in Supplementary Material' but does not include any pseudocode or algorithm blocks in the main body. |
| Open Source Code | No | The paper states 'All codes are implemented using Mat Conv Net (Vedaldi and Lenc 2015)' but does not provide a link or explicit statement about the availability of their own source code for the described methodology. |
| Open Datasets | Yes | We apply this method to quantize different models that have been pre-trained on the ImageNet dataset and achieve good quantization results on all models. The quantized model is then tested on the validation set of ImageNet (Krizhevsky, Sutskever, and Hinton 2012), which contains 50000 images in 1000 classes. |
| Dataset Splits | Yes | The quantized model is then tested on the validation set of ImageNet (Krizhevsky, Sutskever, and Hinton 2012), which contains 50000 images in 1000 classes. |
| Hardware Specification | Yes | All experiments are conducted using a Dell workstation with E5-2630 CPU and Titan X Pascal GPU. |
| Software Dependencies | Yes | All codes are implemented using MatConvNet (Vedaldi and Lenc 2015). |
| Experiment Setup | Yes | For all three methods, we use uniform quantization for each layer. First, calculate the mean value of adversarial noise for the dataset: mean_r = (1/\|D\|) Σ_{x∈D} (z_(1) − z_(2))². Then, fix the Δacc value; for example, Δacc = 10%. Then, for each layer i, fix the b_i value; for example, use b_i = 10. |
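
The quoted setup can be sketched as follows. Since the authors released no code, this is a minimal illustrative reconstruction, not their implementation: `uniform_quantize` is a generic symmetric uniform quantizer, and `mean_adversarial_noise` assumes `z_(1)`, `z_(2)` denote the two largest logits per sample, as suggested by the quoted formula.

```python
import numpy as np

def uniform_quantize(w, bits):
    """Symmetric uniform quantization of a weight tensor to a given bit-width.

    A generic per-layer quantizer, standing in for the paper's
    'uniform quantization for each layer' (exact scheme unspecified here).
    """
    scale = np.abs(w).max()
    if scale == 0:
        return w
    levels = 2 ** (bits - 1) - 1          # signed levels, e.g. 511 for 10 bits
    return np.round(w / scale * levels) / levels * scale

def mean_adversarial_noise(top2_logits):
    """mean_r = (1/|D|) * sum over the dataset of (z_(1) - z_(2))^2,
    where z_(1), z_(2) are the two largest logits for each sample."""
    z = np.asarray(top2_logits)           # shape (N, 2): top-2 logits per sample
    return np.mean((z[:, 0] - z[:, 1]) ** 2)

# Initialization step from the quoted setup: fix b_i = 10 for every layer i.
rng = np.random.default_rng(0)
layers = [rng.standard_normal((4, 4)) for _ in range(3)]   # toy weight tensors
quantized = [uniform_quantize(w, bits=10) for w in layers]
```

At 10 bits the per-weight quantization error is bounded by scale/(2·511), so the quantized layers stay very close to the originals, which is consistent with using b_i = 10 as a high-precision starting point.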