Distilling Image Classifiers in Object Detectors

Authors: Shuxuan Guo, Jose M. Alvarez, Mathieu Salzmann

NeurIPS 2021

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Our experiments on several detectors with different backbones demonstrate the effectiveness of our approach, allowing us to outperform the state-of-the-art detector-to-detector distillation methods." (A generic distillation sketch follows the table.) |
| Researcher Affiliation | Collaboration | Shuxuan Guo (1, 2), Jose M. Alvarez (2), Mathieu Salzmann (1). Affiliations: 1 CVLab, EPFL, Lausanne 1015, Switzerland; 2 NVIDIA, Santa Clara, CA 95051, USA. |
| Pseudocode | No | The paper describes the proposed algorithms mathematically and textually but does not include formal pseudocode blocks or algorithm listings. |
| Open Source Code | Yes | "Our code is available at: https://github.com/NVlabs/DICOD." |
| Open Datasets | Yes | "All models are trained and evaluated on MS COCO2017 [23], which contains over 118k images for training and 5k images for validation (minival) depicting 80 foreground object classes." |
| Dataset Splits | Yes | Same evidence as Open Datasets: the standard COCO2017 train/minival split (over 118k training images, 5k validation images, 80 foreground classes). See the dataset config sketch after the table. |
| Hardware Specification | Yes | "All object detectors are trained in their default settings on Tesla V100 GPUs." |
| Software Dependencies | No | "Our implementation is based on MMDetection [6] with PyTorch [29]." While the software is named, specific version numbers for MMDetection or PyTorch are not provided. |
| Experiment Setup | Yes | "All object detectors are trained in their default settings on Tesla V100 GPUs. The SSDs follow the basic training recipe in MMDetection [6]. The lightweight Faster RCNNs are trained with a 1× schedule for 12 epochs. ... We use a ResNet50 with input resolution 112×112 as the classification teacher for all student detectors. ... The Faster RCNN-R50 and RetinaNet-R50 are trained with a 2× schedule for 24 epochs." Ablation studies for the hyperparameters are also provided. See the schedule sketch after the table. |
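
To make the Dataset Splits row concrete, below is a minimal MMDetection-style dataset sketch for the standard COCO2017 split the paper reports (over 118k train2017 images, 5k val2017/minival images). The paths and field names follow generic MMDetection 2.x conventions and are assumptions; they are not copied from the authors' released configs in the DICOD repository.

```python
# Minimal MMDetection 2.x-style dataset config sketch (assumed conventions,
# not the authors' released config): the standard COCO2017 split, with
# train2017 for training and val2017 ("minival") for evaluation.
dataset_type = 'CocoDataset'
data_root = 'data/coco/'  # hypothetical local path

data = dict(
    train=dict(
        type=dataset_type,
        ann_file=data_root + 'annotations/instances_train2017.json',
        img_prefix=data_root + 'train2017/'),
    val=dict(
        type=dataset_type,
        ann_file=data_root + 'annotations/instances_val2017.json',
        img_prefix=data_root + 'val2017/'),
    test=dict(
        type=dataset_type,
        ann_file=data_root + 'annotations/instances_val2017.json',
        img_prefix=data_root + 'val2017/'))
```

MMDetection evaluates on val2017 by default, which matches the paper's use of the 5k-image minival set for validation.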
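The Experiment Setup row refers to MMDetection's 1× and 2× training schedules. As a sketch, these correspond to the framework's default 12- and 24-epoch step-decay schedules; the values below are standard MMDetection 2.x defaults, assumed rather than taken from the paper.

```python
# MMDetection 2.x default "1x" schedule (assumed framework defaults, not the
# authors' exact config): 12 epochs, step LR decay at epochs 8 and 11.
# The paper trains the lightweight Faster RCNNs with this schedule.
lr_config = dict(policy='step', step=[8, 11])
runner = dict(type='EpochBasedRunner', max_epochs=12)

# "2x" schedule, used for Faster RCNN-R50 and RetinaNet-R50: 24 epochs,
# with the decay steps scaled accordingly.
# lr_config = dict(policy='step', step=[16, 22])
# runner = dict(type='EpochBasedRunner', max_epochs=24)
```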
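Since the paper distills a classification teacher (a ResNet50 at 112×112 input) into object detectors, the sketch below shows a generic classifier-to-detector distillation loss to make the idea concrete. This is an illustrative temperature-scaled KL term in the style of Hinton et al. (2015), not the authors' DICOD objective; the function name, the crop-based teacher inputs, and the temperature are all assumptions, and the actual method is in the repository linked in the table.

```python
import torch
import torch.nn.functional as F

def classifier_to_detector_kd_loss(student_box_logits: torch.Tensor,
                                   teacher_logits: torch.Tensor,
                                   temperature: float = 4.0) -> torch.Tensor:
    """Illustrative KD loss (hypothetical, not the DICOD objective).

    student_box_logits: (N, C) class logits the detector predicts for N
                        boxes matched to ground truth.
    teacher_logits:     (N, C) logits from the classification teacher run on
                        the corresponding ground-truth box crops (e.g.
                        resized to 112x112 for the ResNet50 teacher).
    """
    # Soften both distributions with a temperature, then match the student
    # to the teacher via KL divergence; the T^2 factor keeps gradient
    # magnitudes comparable to the hard-label loss (Hinton et al., 2015).
    t = temperature
    teacher_probs = F.softmax(teacher_logits / t, dim=1)
    student_log_probs = F.log_softmax(student_box_logits / t, dim=1)
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction='batchmean') * (t * t)
```

In practice the ground-truth box crops could be extracted with torchvision.ops.roi_align before being fed to the teacher; how DICOD actually couples the classifier and the detector differs and is documented in the released code.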