Adaptive Quantization of Neural Networks
Authors: Soroosh Khoram, Jing Li
ICLR 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on the MNIST, CIFAR-10, and SVHN datasets showed that the proposed method achieves model-size reductions near or better than the state of the art, with similar error rates. |
| Researcher Affiliation | Academia | Soroosh Khoram, Department of Electrical and Computer Engineering, University of Wisconsin-Madison, khoram@wisc.edu; Jing Li, Department of Electrical and Computer Engineering, University of Wisconsin-Madison, jli@ece.wisc.edu |
| Pseudocode | Yes | Algorithm 1: Quantization of a parameter; Algorithm 2: Adaptive Quantization; Algorithm 3: Choosing hyper-parameters |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to a code repository. |
| Open Datasets | Yes | We use MNIST (LeCun et al., 1998), CIFAR-10 (Krizhevsky & Hinton, 2009), and SVHN (Netzer et al., 2011) benchmarks in our experiments. |
| Dataset Splits | No | The paper refers to a 'training set' for loss calculation and uses standard benchmarks, but it does not give explicit percentages or counts for training, validation, and test splits, nor does it describe how the splits are managed for reproducibility. |
| Hardware Specification | Yes | We implement the proposed quantization on Intel Core i7 CPU (3.5 GHz) with Titan X GPU performing training and quantization. |
| Software Dependencies | No | The paper does not list specific software dependencies (e.g., libraries, frameworks, or programming languages) with version numbers that would be necessary for reproduction. |
| Experiment Setup | Yes | In our experiments in the next section, Scale and Steps are set to 1.1 and 20, respectively. (See the configuration sketch after this table.) |
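
Based on the dataset and hyper-parameter details reported above, the following is a minimal sketch of how a reproduction attempt might be set up. It assumes PyTorch/torchvision, which the paper does not specify (see the Software Dependencies row); the `QuantizationConfig` and `load_benchmarks` names are illustrative, not from the paper, and only the benchmark names and the Scale = 1.1 / Steps = 20 values come from the table above. The adaptive quantization algorithms themselves are not reproduced here, since their details are not given in this summary.

```python
# Reproduction-setup sketch. Framework choice (PyTorch/torchvision) is an
# assumption; dataset names and the Scale/Steps values come from the paper,
# everything else is illustrative scaffolding.
from dataclasses import dataclass

import torch
from torchvision import datasets, transforms


@dataclass
class QuantizationConfig:
    """Hyper-parameters reported in the paper's experiment setup (hypothetical container)."""
    scale: float = 1.1  # "Scale" as reported in the paper
    steps: int = 20     # "Steps" as reported in the paper


def load_benchmarks(root: str = "./data"):
    """Load the three benchmarks named in the paper, using the standard
    torchvision training splits (the paper gives no explicit split sizes)."""
    to_tensor = transforms.ToTensor()
    return {
        "mnist": datasets.MNIST(root, train=True, download=True, transform=to_tensor),
        "cifar10": datasets.CIFAR10(root, train=True, download=True, transform=to_tensor),
        "svhn": datasets.SVHN(root, split="train", download=True, transform=to_tensor),
    }


if __name__ == "__main__":
    cfg = QuantizationConfig()
    benchmarks = load_benchmarks()
    # The paper reports a Titan X GPU; fall back to CPU if none is available.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    print(f"Scale={cfg.scale}, Steps={cfg.steps}, device={device}")
    for name, ds in benchmarks.items():
        print(f"{name}: {len(ds)} training samples")
```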