Object-Aware Domain Generalization for Object Detection

Authors: Wooju Lee, Dasol Hong, Hyungtae Lim, Hyun Myung

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this section, we evaluate the robustness of our method against out-of-distribution data. We also conduct ablation studies to verify the effectiveness of the proposed modules." and "Table 1 shows the performance of the state-of-the-art models on clean and corrupted domains."
Researcher Affiliation | Academia | Urban Robotics Lab, School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Republic of Korea {dnwn24, ds.hong, shapelim, hmyung}@kaist.ac.kr
Pseudocode | No | The paper describes its methods through text and figures but does not contain any structured pseudocode or explicitly labeled algorithm blocks.
Open Source Code | Yes | "Our code is available at https://github.com/WoojuLee24/OA-DG."
Open Datasets | Yes | "Cityscapes-C (Michaelis et al. 2019) is a test benchmark to evaluate object detection robustness to corrupted domains." and "Diverse Weather Dataset (DWD) is an urban-scene detection benchmark to assess object detection robustness to various weather conditions. DWD collected data from BDD-100k (2020), Foggy Cityscapes (2018), and Adverse Weather (2020) datasets."
Dataset Splits | No | The paper uses the standard Cityscapes-C and Diverse Weather Dataset (DWD) benchmarks and specifies training on "daytime-sunny" for DWD, but it does not state the percentages or sample counts of the training, validation, and test splits used in its experiments, nor does it explicitly mention a validation set beyond what is inherent in the benchmarks.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details, such as library or solver names with version numbers (e.g., PyTorch 1.9, CUDA 11.1), needed to replicate the experiment.
Experiment Setup | Yes | "Temperature scaling parameter τ for contrastive loss is set to 0.06. We set λ and γ to 10 and 0.001." and "Temperature scaling hyperparameter τ is set to 0.07. We set λ and γ to 10 and 0.001, respectively, for Faster R-CNN."
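For readers unfamiliar with the role of the temperature parameter τ quoted above, the sketch below shows a generic InfoNCE-style contrastive loss with temperature scaling. This is a minimal NumPy illustration under our own assumptions, not the paper's actual OA-DG loss: the function name `contrastive_loss` and the anchor/positive/negative interface are hypothetical, and only τ is taken from the paper's reported settings (λ and γ are described there as separate loss weights and are not modeled here).

```python
import numpy as np

def contrastive_loss(z_anchor, z_positive, z_negatives, tau=0.06):
    """Generic InfoNCE-style contrastive loss with temperature scaling.

    z_anchor:    (d,)   anchor embedding
    z_positive:  (d,)   embedding that should match the anchor
    z_negatives: (k, d) embeddings that should not match the anchor
    tau:         temperature; smaller values sharpen the softmax,
                 penalizing hard negatives more strongly.
    """
    def l2_normalize(v):
        return v / np.linalg.norm(v, axis=-1, keepdims=True)

    a = l2_normalize(z_anchor)
    p = l2_normalize(z_positive)
    n = l2_normalize(z_negatives)

    pos_logit = np.dot(a, p) / tau        # scalar: anchor-positive similarity
    neg_logits = (n @ a) / tau            # (k,): anchor-negative similarities
    logits = np.concatenate([[pos_logit], neg_logits])

    # Cross-entropy with the positive pair as the target class:
    # -log( exp(pos) / sum(exp(all)) )
    return float(-pos_logit + np.log(np.sum(np.exp(logits))))
```

A smaller τ (e.g., the paper's 0.06 versus 0.07) makes the loss more sensitive to negatives that lie close to the anchor, which is the usual reason this hyperparameter is tuned per detector.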