DA-Ada: Learning Domain-Aware Adapter for Domain Adaptive Object Detection
Authors: Haochen Li, Rui Zhang, Hantao Yao, Xin Zhang, Yifan Hao, Xinkai Song, Xiaqing Li, Yongwei Zhao, Yunji Chen, Ling Li
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Comprehensive experiments over multiple DAOD tasks show that DA-Ada can efficiently infer a domain-aware visual encoder for boosting domain adaptive object detection. |
| Researcher Affiliation | Academia | 1 Intelligent Software Research Center, Institute of Software, CAS, Beijing, China; 2 State Key Lab of Processors, Institute of Computing Technology, CAS, Beijing, China; 3 State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, CAS, Beijing, China; 4 University of Chinese Academy of Sciences, Beijing, China |
| Pseudocode | No | The paper provides block diagrams and mathematical equations in Figure 2, but does not present pseudocode or a clearly labeled algorithm block. |
| Open Source Code | Yes | Our code is available at https://github.com/Therock90421/DA-Ada. |
| Open Datasets | Yes | Cityscapes [9] contains diverse street scenes captured by a mobile camera in daylight. The regular partition consists of 2,975 training and 500 validation images annotated with eight classes. Foggy Cityscapes [54] simulates three distinct densities of fog on Cityscapes, containing 8,925 training images and 1,500 validation images. |
| Dataset Splits | Yes | The regular partition of Cityscapes [9] consists of 2,975 training and 500 validation images annotated with eight classes. Foggy Cityscapes [54] contains 8,925 training images and 1,500 validation images (summarized in code below). |
| Hardware Specification | Yes | All experiments are deployed on 8 Tesla V100 GPUs. |
| Software Dependencies | No | The paper mentions using RegionCLIP (ResNet-50), Faster R-CNN, and the SGD optimizer, but does not provide version numbers for the software libraries or languages used (e.g., Python, PyTorch, CUDA). |
| Experiment Setup | Yes | The hyperparameters λ_dia, λ_dita, and λ_dec are set to 0.1, 1.0, and 0.1, respectively. We set the batch size of each domain to 8 and use the SGD optimizer with a warm-up learning rate. Mean Average Precision (mAP) with an IoU threshold of 0.5 is taken as the evaluation metric (collected into a configuration sketch below). |
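
For quick reference, the split sizes quoted in the Open Datasets and Dataset Splits rows can be collected into a small Python summary. This is a convenience sketch only; the dictionary layout and key names are mine, not taken from the paper or the DA-Ada repository.

```python
# Image counts per split as reported in the paper; key names are
# illustrative and do not come from the DA-Ada codebase.
DATASET_SPLITS = {
    "cityscapes": {"train": 2_975, "val": 500, "num_classes": 8},
    # 8,925 = 2,975 training scenes rendered at three fog densities.
    "foggy_cityscapes": {"train": 8_925, "val": 1_500},
}
```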
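
Similarly, the values in the Experiment Setup row can be gathered into a minimal training-configuration sketch, assuming PyTorch. Only the loss weights, per-domain batch size, optimizer family, the use of warm-up, and the mAP IoU threshold come from the paper; the base learning rate, momentum, warm-up length, the linear warm-up form, and the loss-combination formula are assumptions filled in around those reported numbers.

```python
import torch
import torch.nn as nn

# Loss-term weights reported in the paper (λ_dia, λ_dita, λ_dec).
LAMBDA_DIA, LAMBDA_DITA, LAMBDA_DEC = 0.1, 1.0, 0.1
BATCH_SIZE_PER_DOMAIN = 8  # stated for both source and target domains

# Placeholder head standing in for the RegionCLIP (ResNet-50) + Faster R-CNN
# detector; 8 Cityscapes classes plus background.
detector = nn.Linear(256, 9)

# SGD is stated in the paper; lr and momentum here are assumed values.
optimizer = torch.optim.SGD(detector.parameters(), lr=0.01, momentum=0.9)

def warmup_lr(step: int, base_lr: float = 0.01, warmup_iters: int = 500) -> float:
    """Linear warm-up ramp; the paper says only 'warm-up learning rate',
    so this schedule and its length are guesses."""
    return base_lr * min(1.0, (step + 1) / warmup_iters)

def total_loss(l_det, l_dia, l_dita, l_dec):
    """Detection loss plus weighted adapter losses; the weighted-sum form
    is an assumption consistent with the reported weights."""
    return l_det + LAMBDA_DIA * l_dia + LAMBDA_DITA * l_dita + LAMBDA_DEC * l_dec

def iou(box_a, box_b):
    """IoU of two (x1, y1, x2, y2) boxes. For the reported metric, a
    detection matches a ground-truth box of the same class when IoU >= 0.5."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

With a per-domain batch size of 8 on the 8 Tesla V100 GPUs from the Hardware row, one source and one target image would land on each GPU per step, though the paper does not state how the batch is sharded.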