Make RepVGG Greater Again: A Quantization-Aware Approach

Authors: Xiangxiang Chu, Liang Li, Bo Zhang

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on detection and semantic segmentation tasks verify its generalization.
Researcher Affiliation | Industry | Xiangxiang Chu, Liang Li, Bo Zhang (Meituan); {chuxiangxiang,liliang58,zhangbo97}@meituan.com
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | We will release the code to facilitate reproduction and future research.
Open Datasets | Yes | We mainly focus our experiments on the ImageNet dataset (Deng et al. 2009). We verify the generalization of our method based on a recent popular detector, YOLOv6 (Li et al. 2022), which extensively adopts the reparameterization design, and on semantic segmentation. We train and evaluate QARepVGG-fashioned YOLOv6 on the COCO 2017 dataset (Lin et al. 2014).
Dataset Splits | Yes | As shown in Table 1, RepVGG-A0 severely suffers from a large accuracy drop (from 20% to 77% top-1 accuracy) on the ImageNet validation dataset after standard PTQ. Table 5: Classification results on the ImageNet validation dataset.
Hardware Specification | Yes | RepVGG and QARepVGG versions are trained for 300 epochs on 8 Tesla-V100 GPUs.
Software Dependencies | No | As for PTQ, we use the PyTorch-Quantization toolkit (NVIDIA 2018), which is widely used in deployment on NVIDIA GPUs. (A calibration sketch with this toolkit follows below the table.)
Experiment Setup | Yes | All models are trained for 10 epochs (the first three for warm-up) with an initial learning rate of 0.01. RepVGG and QARepVGG versions are trained for 300 epochs on 8 Tesla-V100 GPUs. All models are trained using a crop size of 512×1024. (A learning-rate schedule sketch follows below the table.)
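The Software Dependencies row cites NVIDIA's PyTorch-Quantization toolkit as the PTQ backend. The following is a minimal sketch of that toolkit's standard calibrate-then-quantize flow, assuming a fused (deploy-mode) RepVGG/QARepVGG network; `build_repvgg_a0_deploy` and `calib_loader` are hypothetical placeholders, not code from the paper.

```python
# Hedged sketch: the standard calibrate-then-quantize PTQ flow of NVIDIA's
# pytorch-quantization toolkit. `build_repvgg_a0_deploy` and `calib_loader`
# are hypothetical placeholders, not code from the paper.
import torch
from pytorch_quantization import quant_modules
from pytorch_quantization import nn as quant_nn

quant_modules.initialize()            # patch torch.nn layers with quantized versions
                                      # (must run before the model is constructed)

model = build_repvgg_a0_deploy()      # hypothetical: fused (deploy-mode) model
model.cuda().eval()

# 1) Switch quantizers to calibration mode and collect activation statistics.
for m in model.modules():
    if isinstance(m, quant_nn.TensorQuantizer):
        m.disable_quant()
        m.enable_calib()

with torch.no_grad():
    for images, _ in calib_loader:    # hypothetical calibration DataLoader
        model(images.cuda())

# 2) Compute amax (quantization ranges) from the statistics and re-enable INT8.
for m in model.modules():
    if isinstance(m, quant_nn.TensorQuantizer):
        m.load_calib_amax()
        m.enable_quant()
        m.disable_calib()
```

The calibrated model is then typically exported to ONNX and deployed with TensorRT, which matches the toolkit's stated use for deployment on NVIDIA GPUs.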
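The Experiment Setup row describes a 10-epoch schedule with a 3-epoch warm-up and an initial learning rate of 0.01. The sketch below shows one way to express such a schedule in PyTorch; the SGD optimizer, momentum value, and cosine decay after warm-up are assumptions, since the quoted excerpt does not specify them.

```python
# Hedged sketch: 10-epoch schedule with a 3-epoch linear warm-up to lr = 0.01,
# mirroring the quoted setup. SGD, momentum, and cosine decay after warm-up
# are assumptions, not details taken from the paper.
import math
import torch

BASE_LR, TOTAL_EPOCHS, WARMUP_EPOCHS = 0.01, 10, 3

def lr_multiplier(epoch: int) -> float:
    """Return the factor applied to BASE_LR at a given epoch."""
    if epoch < WARMUP_EPOCHS:
        return (epoch + 1) / WARMUP_EPOCHS             # linear warm-up
    progress = (epoch - WARMUP_EPOCHS) / (TOTAL_EPOCHS - WARMUP_EPOCHS)
    return 0.5 * (1.0 + math.cos(math.pi * progress))  # cosine decay (assumed)

model = torch.nn.Linear(8, 8)                          # stand-in for the actual network
optimizer = torch.optim.SGD(model.parameters(), lr=BASE_LR, momentum=0.9)
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lr_multiplier)

for epoch in range(TOTAL_EPOCHS):
    # ... run one training epoch over the data loader here ...
    scheduler.step()
    print(epoch, scheduler.get_last_lr())
```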