ClimbQ: Class Imbalanced Quantization Enabling Robustness on Efficient Inferences

Authors: Ting-An Chen, De-Nian Yang, Ming-Syan Chen

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on class-imbalanced and benchmark balanced datasets reveal that ClimbQ outperforms the state-of-the-art quantization techniques, especially on highly imbalanced data.
Researcher Affiliation | Academia | Ting-An Chen (1,2), De-Nian Yang (2,3), Ming-Syan Chen (1,3); (1) Graduate Institute of Electrical Engineering, National Taiwan University, Taiwan; (2) Institute of Information Science, Academia Sinica, Taiwan; (3) Research Center for Information Technology Innovation, Academia Sinica, Taiwan
Pseudocode | No | The paper describes procedures in text and uses diagrams, but does not contain a formally structured pseudocode or algorithm block.
Open Source Code | Yes | Code is available at https://github.com/tinganchen/ClimbQ.git.
Open Datasets | Yes | We evaluate the effectiveness of ClimbQ and ClimbQ+ (ClimbQ with HomoVar loss) on the class-imbalanced datasets Syndigits-LT [37], CIFAR-10-LT [38], and CIFAR-100-LT [38]. A parameter γ determines the degree of imbalance in the datasets. ... Benchmark balanced dataset: although we are primarily concerned with imbalanced data, we compare with the baseline quantization approaches on the benchmark dataset ImageNet-ILSVRC 2012 [39]. (A dataset-construction sketch based on this description appears after the table.)
Dataset Splits | No | We train at imbalance ratios of 10, 50, and 200 and validate on the balanced testing data (i.e., γ = 1) to fairly evaluate the performance of each class. The paper mentions evaluating on 'balanced testing data' but does not specify a separate validation split with percentages or sample counts for hyperparameter tuning during training.
Hardware Specification | Yes | We utilize an NVIDIA Tesla V100 GPU and an NVIDIA GTX 2080Ti for implementation.
Software Dependencies | No | The paper describes the implementation but does not provide specific version numbers for software dependencies such as Python, PyTorch, or other libraries.
Experiment Setup | Yes | ImageNet uses a batch size of 512, while Syndigits-LT, CIFAR-10-LT, and CIFAR-100-LT use 128. The maximum number of training epochs is 200. The learning rate ranges from 0.01 to 0.1. The significance level α from Theorem 3.2 is set to 0.05. The constant factor β in Eq. (4) is set to 0.999. (These values are collected in the configuration sketch after the table.)
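
The long-tailed (LT) datasets above are built by subsampling balanced source data according to the imbalance parameter γ, while testing uses balanced data (γ = 1). The exact subsampling rule is not quoted in this report; the sketch below is a minimal Python illustration that assumes the exponential per-class profile commonly used for CIFAR-10-LT and CIFAR-100-LT [38], with γ taken as the ratio between the largest and smallest class counts. The function names are hypothetical and do not come from the ClimbQ repository.

    import numpy as np

    def long_tailed_counts(n_per_class, num_classes, gamma):
        """Per-class sample counts under an assumed exponential long-tailed profile.

        gamma is the imbalance ratio (largest class count / smallest class count);
        gamma = 1 keeps the dataset balanced, as used for the test split.
        """
        if gamma == 1:
            return [n_per_class] * num_classes
        # Class c keeps n_per_class * gamma^(-c / (num_classes - 1)) samples,
        # so class 0 keeps everything and the last class keeps n_per_class / gamma.
        return [int(n_per_class * gamma ** (-c / (num_classes - 1)))
                for c in range(num_classes)]

    def subsample_indices(labels, counts, seed=0):
        """Randomly keep counts[c] examples of each class c from a balanced label array."""
        rng = np.random.default_rng(seed)
        keep = []
        for c, n in enumerate(counts):
            class_idx = np.flatnonzero(np.asarray(labels) == c)
            keep.extend(rng.choice(class_idx, size=n, replace=False).tolist())
        return sorted(keep)

    # Example: CIFAR-10-LT at imbalance ratio 200 (5,000 training images per class originally):
    # the head class keeps all 5,000 images and the rarest class keeps 25.
    print(long_tailed_counts(5000, 10, 200))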
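
For convenience, the reported hyperparameters can be gathered into a single configuration object. This sketch records only the values quoted in the table (batch sizes, maximum epochs, learning-rate range, α, β); the optimizer, weight decay, and learning-rate schedule are not specified in this report and are therefore omitted, and the class and field names are illustrative rather than taken from the authors' code.

    from dataclasses import dataclass

    @dataclass
    class ClimbQTrainingConfig:
        """Training hyperparameters as reported in the experiment setup."""
        dataset: str = "cifar10_lt"     # syndigits_lt, cifar10_lt, cifar100_lt, or imagenet
        imbalance_ratio: int = 200      # gamma in {10, 50, 200}; test data uses gamma = 1
        max_epochs: int = 200           # maximum number of training epochs
        lr_min: float = 0.01            # lower end of the reported learning-rate range
        lr_max: float = 0.1             # upper end of the reported learning-rate range
        alpha: float = 0.05             # significance level for Theorem 3.2
        beta: float = 0.999             # constant factor in Eq. (4)

        @property
        def batch_size(self) -> int:
            # ImageNet uses a batch size of 512; the LT datasets use 128.
            return 512 if self.dataset == "imagenet" else 128

    config = ClimbQTrainingConfig(dataset="cifar100_lt", imbalance_ratio=50)
    print(config.batch_size)  # 128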