Dual Relation Knowledge Distillation for Object Detection

Authors: Zhen-Liang Ni, Fukui Yang, Shengzhao Wen, Gang Zhang

IJCAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our method achieves state-of-the-art performance, which improves Faster R-CNN based on ResNet50 from 38.4% to 41.6% mAP and improves RetinaNet based on ResNet50 from 37.4% to 40.3% mAP on COCO2017. (Section 4)
Researcher Affiliation | Collaboration | Zhen-Liang Ni (Institute of Automation, Chinese Academy of Sciences); Fukui Yang, Shengzhao Wen, Gang Zhang (Department of Computer Vision Technology (VIS), Baidu Inc.)
Pseudocode | Yes | Algorithm 1: Dual Relation Knowledge Distillation (a hedged loss sketch follows the table)
Open Source Code | No | The paper does not contain any explicit statements about releasing source code or links to a code repository.
Open Datasets | Yes | COCO2017 [Lin et al., 2014] is used to evaluate our method, which is a challenging dataset in object detection. It contains 120k images and 80 object classes. (A loading sketch follows the table.)
Dataset Splits | No | The paper uses COCO2017 for evaluation and mentions training strategies such as "12 epochs" or "24 epochs", but it does not explicitly provide train/validation/test splits or percentages.
Hardware Specification | Yes | All experiments are performed on 8 Tesla P40 GPUs.
Software Dependencies | No | The paper mentions using 'SGD' as an optimizer but does not provide specific version numbers for any software, libraries, or frameworks.
Experiment Setup | Yes | The batch size is set to 16. The initial learning rate is 0.02. The momentum is set to 0.9 and the weight decay is 0.0001. Unless specified, the ablation experiments adopt the 1× learning schedule and the comparison experiments with other methods adopt the 2× learning schedule. (An optimizer sketch follows the table.)
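
The paper presents its method as pseudocode (Algorithm 1) without released code. As a rough illustration of the general idea only, here is a minimal PyTorch sketch of a relation-style distillation loss between teacher and student feature maps. The cosine-similarity relation matrix, the MSE matching, and all names are assumptions made for illustration; this is not the paper's Algorithm 1.

```python
# Hypothetical relation-based distillation loss -- NOT the paper's Algorithm 1.
# The pairwise cosine-similarity relation and MSE matching are assumptions.
import torch
import torch.nn.functional as F

def relation_matrix(feat: torch.Tensor) -> torch.Tensor:
    """Pairwise cosine similarity between all spatial positions.

    feat: (B, C, H, W) feature map -> (B, H*W, H*W) relation matrix.
    """
    x = feat.flatten(2).transpose(1, 2)   # (B, H*W, C)
    x = F.normalize(x, dim=-1)            # unit-norm feature per position
    return x @ x.transpose(1, 2)          # (B, H*W, H*W)

def relation_distillation_loss(student_feat, teacher_feat, weight=1.0):
    """Match the student's pixel-relation structure to the teacher's."""
    with torch.no_grad():                 # teacher is frozen during distillation
        r_t = relation_matrix(teacher_feat)
    r_s = relation_matrix(student_feat)
    return weight * F.mse_loss(r_s, r_t)
```

Because the relation matrices compare spatial positions (H*W × H*W), this formulation is indifferent to a channel-width mismatch between teacher and student backbones, which is one reason relation-based losses are popular for distilling across different architectures.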
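Since COCO2017 is public, the evaluation data itself is easy to obtain. As a convenience, here is a minimal loading sketch using torchvision's CocoDetection wrapper; the local directory paths are placeholders, not paths from the paper.

```python
# Minimal COCO2017 loading sketch; directory paths are placeholders.
from torchvision.datasets import CocoDetection

train_set = CocoDetection(
    root="coco2017/train2017",  # image directory
    annFile="coco2017/annotations/instances_train2017.json",
)
val_set = CocoDetection(
    root="coco2017/val2017",
    annFile="coco2017/annotations/instances_val2017.json",
)
print(len(train_set), len(val_set))  # ~118k train / 5k val images
```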
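The reported hyperparameters (batch size 16, initial learning rate 0.02, momentum 0.9, weight decay 0.0001) map directly onto a standard SGD setup. A minimal PyTorch sketch follows; the step-decay milestones (epochs 8/11 for the 1× schedule, 16/22 for 2×) are the common MMDetection convention, assumed here because the paper does not state them.

```python
# Sketch of the reported optimizer settings. Decay milestones are the usual
# MMDetection 1x/2x defaults -- an assumption, not stated in the paper.
import torch

def build_optimizer(model: torch.nn.Module, schedule: str = "1x"):
    optimizer = torch.optim.SGD(
        model.parameters(),
        lr=0.02,            # initial learning rate (paper)
        momentum=0.9,       # momentum (paper)
        weight_decay=1e-4,  # weight decay (paper)
    )
    milestones = [8, 11] if schedule == "1x" else [16, 22]
    scheduler = torch.optim.lr_scheduler.MultiStepLR(
        optimizer, milestones=milestones, gamma=0.1
    )
    return optimizer, scheduler
```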