Deep Neural Network Quantization via Layer-Wise Optimization Using Limited Training Data
Authors: Shangyu Chen, Wenya Wang, Sinno Jialin Pan
AAAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on benchmark deep models are conducted to demonstrate the effectiveness of our proposed method using 1% of CIFAR-10 and ImageNet datasets. |
| Researcher Affiliation | Academia | Shangyu Chen, Wenya Wang, Sinno Jialin Pan (Nanyang Technological University); schen025@e.ntu.edu.sg, wangwy@ntu.edu.sg, sinnopan@ntu.edu.sg |
| Pseudocode | Yes | Algorithm 1 Layer-wise Unsupervised Network Quantization |
| Open Source Code | Yes | Codes are available in: https://github.com/csyhhu/L-DNQ |
| Open Datasets | Yes | Two benchmark datasets are used including ImageNet ILSVRC-2012 and CIFAR-10. |
| Dataset Splits | Yes | 500 training instances in CIFAR-10 and 12,800 in ImageNet are randomly sampled to simulate the scenario of limited instances. ... For fair comparison with training-based quantization, we reduce training data to 1% of the original training dataset. (A sampling sketch follows the table.) |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU model, CPU type) used for running the experiments. |
| Software Dependencies | No | The paper mentions general tools and frameworks (e.g., 'deep learning framework'), but does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | 500 training instances in CIFAR-10 and 12,800 in ImageNet are randomly sampled to simulate the scenario of limited instances. All experiments are conducted 5 times and the average result is reported. ... For fair comparison with training-based quantization, we reduce training data to 1% of the original training dataset. ... L-DNQ adopts the following quantization intervals for each layer: Ω_l = α_l · {0, 2^0, 2^1, 2^2, ..., 2^b}. (A quantization-grid sketch follows the table.) |
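
The limited-data setting described in the Dataset Splits and Experiment Setup rows (500 randomly sampled CIFAR-10 training instances, i.e., 1% of the 50,000 training images) can be simulated in a few lines of PyTorch. The snippet below is a minimal sketch under that assumption, not the authors' released code; the seed, batch size, and transform are illustrative choices.

```python
# Minimal sketch of the limited-data setting: randomly sample 500 CIFAR-10
# training instances (1% of the 50,000-image training set) and wrap them in
# a DataLoader. Not the authors' released code; seed/batch size are arbitrary.
import numpy as np
import torch
import torchvision
import torchvision.transforms as T

transform = T.Compose([T.ToTensor()])
full_train = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True, transform=transform)

num_samples = 500  # 1% of CIFAR-10's 50,000 training images
rng = np.random.default_rng(seed=0)
indices = rng.choice(len(full_train), size=num_samples, replace=False)

subset = torch.utils.data.Subset(full_train, indices.tolist())
loader = torch.utils.data.DataLoader(subset, batch_size=128, shuffle=True)
```

The same recipe with 12,800 sampled instances would cover the reported 1% ImageNet subset.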
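The Experiment Setup row quotes the per-layer quantization grid Ω_l = α_l · {0, 2^0, 2^1, ..., 2^b}. The sketch below shows one way such a grid could be built and applied by nearest-level rounding. It is an illustration only: the per-layer scale α_l is set to the mean absolute weight here (the paper obtains it through its layer-wise optimization), and handling negative weights via the sign is an assumption.

```python
# Illustrative helper (not from the paper's repository) for the quoted grid
# Omega_l = alpha_l * {0, 2^0, 2^1, ..., 2^b}: snap each weight to the nearest level.
import torch

def build_grid(alpha: float, b: int) -> torch.Tensor:
    """Return the quantization levels alpha * {0, 2^0, ..., 2^b}."""
    levels = [0.0] + [2.0 ** k for k in range(b + 1)]
    return alpha * torch.tensor(levels)

def quantize_layer(weights: torch.Tensor, alpha: float, b: int) -> torch.Tensor:
    """Map each weight magnitude to its nearest grid level; the sign is
    reapplied afterwards (an assumption, since the quoted grid is nonnegative)."""
    grid = build_grid(alpha, b)
    magnitudes = weights.abs().reshape(-1, 1)              # shape (N, 1)
    idx = (magnitudes - grid.reshape(1, -1)).abs().argmin(dim=1)
    return (torch.sign(weights).reshape(-1) * grid[idx]).reshape(weights.shape)

# Example: quantize one layer with an illustrative alpha_l and b = 2.
w = torch.randn(4, 4)
w_q = quantize_layer(w, alpha=w.abs().mean().item(), b=2)
```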