G-NAS: Generalizable Neural Architecture Search for Single Domain Generalization Object Detection

Authors: Fan Wu, Jinling Gao, Lanqing Hong, Xinbing Wang, Chenghu Zhou, Nanyang Ye

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on the S-DGOD urban-scene datasets demonstrate that the proposed G-NAS achieves SOTA performance compared to baseline methods. ... Experiments ... Experimental Setup ... Datasets ... Ablation Study
Researcher Affiliation | Collaboration | (1) Shanghai Jiao Tong University, Shanghai, China; (2) Huawei Noah's Ark Lab, Hong Kong, China
Pseudocode | Yes | Algorithm 1: G-NAS: Generalizable Neural Architecture Search for Single Domain Generalization Object Detection
Open Source Code | Yes | Codes are available at https://github.com/wufan-cse/G-NAS.
Open Datasets | Yes | To evaluate different methods' single-domain generalization ability, we follow the setting proposed by Wu and Deng (2022). The dataset contains five urban-scene domains with distinct weather conditions, including Daytime-Sunny, Daytime-Foggy, Dusk-Rainy, Night-Sunny, and Night-Rainy.
Dataset Splits | No | Daytime-Sunny is the source training domain and the other four domains are used only for testing. ... In this paper, we use L_train to optimize α, since the in-domain (i.d.) validation set is not suitable for S-DGOD: we aim to improve OoD generalization ability instead of selecting models with optimal i.d. performance. (See the domain-split sketch after this table.)
Hardware Specification | No | The paper states only that 'All experiments are conducted on a computer with 8 GPUs', without specifying the make, model, or type of the GPUs or other hardware components.
Software Dependencies | No | The paper describes the optimizer (SGD) and training parameters but does not specify any software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow, or specific library versions).
Experiment Setup | Yes | We train all models until full convergence for 12 epochs. We set λ_g to 1.0. All parameters in our NAS framework are randomly initialized. We apply an SGD optimizer with the learning rate set to 0.02, and we set the batch size to 4 per GPU.
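
For reference, here is a minimal sketch of the single-source S-DGOD split described in the Dataset Splits row above: training uses only Daytime-Sunny, and the remaining four weather domains are held out for testing. The dictionary name and key names are assumptions for illustration; only the domain names come from the paper.

```python
# Illustrative single-source S-DGOD split; key names are assumptions,
# only the domain names are taken from the paper.
SDGOD_SPLIT = {
    "source_train": ["Daytime-Sunny"],
    "target_test": ["Daytime-Foggy", "Dusk-Rainy", "Night-Sunny", "Night-Rainy"],
}
```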
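Below is a hedged PyTorch-style sketch of the reported training configuration (12 epochs, SGD with learning rate 0.02, batch size 4 per GPU, λ_g = 1.0). The model here is a stand-in, momentum and weight decay are not reported in the excerpt, and the actual detector/supernet is in the authors' repository.

```python
import torch

# Hyperparameters reported in the Experiment Setup row above.
NUM_EPOCHS = 12          # trained until full convergence for 12 epochs
LEARNING_RATE = 0.02     # SGD learning rate
BATCH_SIZE_PER_GPU = 4   # batch size of 4 on each of the 8 GPUs
LAMBDA_G = 1.0           # the paper's lambda_g coefficient

# Stand-in module; the real NAS supernet/detector is in
# https://github.com/wufan-cse/G-NAS and is not reproduced here.
model = torch.nn.Conv2d(3, 16, kernel_size=3)

# Only the reported learning rate is set; momentum and weight decay
# are not specified in the excerpt, so PyTorch defaults are kept.
optimizer = torch.optim.SGD(model.parameters(), lr=LEARNING_RATE)
```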