DSD-DA: Distillation-based Source Debiasing for Domain Adaptive Object Detection

Authors: Yongchao Feng, Shiwei Li, Yingjie Gao, Ziyue Huang, Yanan Zhang, Qingjie Liu, Yunhong Wang

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments have been conducted to manifest the effectiveness of this method, which consistently improves the strong baseline by large margins, outperforming existing alignment-based works.
Researcher Affiliation | Academia | (1) State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, China; (2) Hangzhou Innovation Institute, Beihang University, Hangzhou, China; (3) Zhongguancun Laboratory, Beijing, China. Correspondence to: Qingjie Liu <qingjie.liu@buaa.edu.cn>.
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | Our implementation is based on Detectron2 and the publicly available code by AT and CMT. The paper builds on these public codebases but does not release its own implementation.
Open Datasets | Yes | We conduct our experiments on four datasets: (1) Cityscapes (Cordts et al., 2016) contains authentic urban street scenes captured under normal weather conditions, encompassing 2,975 training images and 500 validation images with pixel-level annotations. (2) Foggy Cityscapes (Sakaridis et al., 2018) is a derivative dataset that simulates dense fog on Cityscapes, keeping the same train/validation split and annotations. (3) KITTI (Geiger et al., 2012) is a popular autonomous-driving dataset with 7,481 labeled training images. (4) SIM10k (Johnson-Roberson et al., 2016) is a synthetic dataset of 10,000 images rendered from the video game Grand Theft Auto V (GTA5). (A hedged Detectron2 registration sketch for these datasets follows the table.)
Dataset Splits | Yes | Cityscapes (Cordts et al., 2016) contains authentic urban street scenes captured under normal weather conditions, encompassing 2,975 training images and 500 validation images with pixel-level annotations.
Hardware Specification | Yes | Each experiment is conducted on 4 NVIDIA 3090 GPUs.
Software Dependencies | No | Our implementation is based on Detectron2 and the publicly available code by AT and CMT. The paper names Detectron2 as the implementation base but gives no version number for it or for any other software dependency.
Experiment Setup | Yes | In the DSD training stage, we resize all the cropped images to 224 × 224 and set the IoU threshold T = 0.8. DA-Faster was trained with the SGD optimizer with a 0.001 learning rate, batch size 2, momentum 0.9, and weight decay 0.0005 for 70k iterations on one NVIDIA 2080Ti GPU. ... The AdamW optimizer is employed for the classification-teacher model, initialized with a learning rate of 0.0001 for 12 epochs. We decay the learning rate by a factor of 0.1 at epochs 9 and 11, and the total batch size is set to 64. ... TROLN is trained with the SGD optimizer with a 0.005 learning rate and batch size 2 for 12 epochs, and we decay the learning rate by a factor of 0.1 at epochs 6 and 7. (A minimal optimizer-configuration sketch follows below.)
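
The four datasets listed in the Open Datasets row are all public. Since the implementation is based on Detectron2, a minimal registration sketch is given below; it assumes each dataset has first been converted to COCO-format JSON annotations, and every name and path is illustrative rather than taken from the paper.

```python
# Hypothetical Detectron2 registration for the four public datasets.
# Assumes COCO-format JSON conversions; all names and paths are placeholders.
from detectron2.data.datasets import register_coco_instances

DATASETS = {
    # name: (annotation JSON, image root) -- illustrative paths
    "cityscapes_train":       ("anns/cityscapes_train.json", "cityscapes/leftImg8bit/train"),
    "cityscapes_val":         ("anns/cityscapes_val.json",   "cityscapes/leftImg8bit/val"),
    "foggy_cityscapes_train": ("anns/foggy_train.json",      "foggy/leftImg8bit_foggy/train"),
    "foggy_cityscapes_val":   ("anns/foggy_val.json",        "foggy/leftImg8bit_foggy/val"),
    "kitti_train":            ("anns/kitti_train.json",      "kitti/training/image_2"),
    "sim10k_train":           ("anns/sim10k_train.json",     "sim10k/JPEGImages"),
}

for name, (json_file, image_root) in DATASETS.items():
    register_coco_instances(name, {}, json_file, image_root)
```

Once registered, these names can be referenced from a Detectron2 config via the DATASETS.TRAIN and DATASETS.TEST fields.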
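
The Experiment Setup row reports three training recipes. The sketch below wires up the stated optimizers and learning-rate schedules in PyTorch; the ResNet-50 networks are stand-ins (assumptions), since the paper's classification-teacher, DA-Faster, and TROLN models are not released with this text, and only the hyperparameters come from the quoted setup.

```python
# Minimal sketch of the reported optimizer settings; the ResNet-50 backbones
# are placeholder stand-ins for models whose code is not included here.
import torch
import torchvision
import torchvision.transforms as T

# DSD stage preprocessing: cropped images resized to 224 x 224; an IoU
# threshold of T = 0.8 is used when selecting crops.
crop_transform = T.Compose([T.Resize((224, 224)), T.ToTensor()])
IOU_THRESHOLD = 0.8

# Classification-teacher: AdamW, lr 1e-4, 12 epochs, total batch size 64,
# lr decayed by 0.1 at epochs 9 and 11.
teacher = torchvision.models.resnet50(weights=None)  # placeholder network
teacher_opt = torch.optim.AdamW(teacher.parameters(), lr=1e-4)
teacher_sched = torch.optim.lr_scheduler.MultiStepLR(
    teacher_opt, milestones=[9, 11], gamma=0.1)

# DA-Faster: SGD, lr 0.001, batch size 2, momentum 0.9, weight decay 0.0005,
# trained for 70k iterations.
da_faster = torchvision.models.resnet50(weights=None)  # placeholder network
da_opt = torch.optim.SGD(da_faster.parameters(), lr=0.001,
                         momentum=0.9, weight_decay=0.0005)

# TROLN: SGD, lr 0.005, batch size 2, 12 epochs, lr decayed by 0.1 at
# epochs 6 and 7. Momentum and weight decay are not quoted for this stage,
# so none are set here.
troln = torchvision.models.resnet50(weights=None)  # placeholder network
troln_opt = torch.optim.SGD(troln.parameters(), lr=0.005)
troln_sched = torch.optim.lr_scheduler.MultiStepLR(
    troln_opt, milestones=[6, 7], gamma=0.1)
```

Calling `teacher_sched.step()` (and likewise `troln_sched.step()`) once per epoch applies the stated decay points.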