Discover, Hallucinate, and Adapt: Open Compound Domain Adaptation for Semantic Segmentation

Authors: Kwanyong Park, Sanghyun Woo, Inkyu Shin, In So Kweon

NeurIPS 2020

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "We evaluate our solution on standard benchmark GTA5 to C-driving, and achieved new state-of-the-art results. To empirically verify the efficacy of our proposals, we conduct extensive ablation studies." |
| Researcher Affiliation | Academia | "Kwanyong Park, Sanghyun Woo, Inkyu Shin, In So Kweon, Korea Advanced Institute of Science and Technology (KAIST), {pkyong7,shwoo93,dlsrbgg33,iskweon77}@kaist.ac.kr" |
| Pseudocode | No | The paper describes its proposed algorithm verbally and with diagrams (Figures 1 and 2) but does not provide pseudocode or a formal algorithm block (a hedged sketch follows this table). |
| Open Source Code | No | The paper provides no explicit links or statements regarding the availability of its source code. |
| Open Datasets | Yes | "In our adaptation experiments, we take GTA5 [33] as the source domain, while the BDD100K dataset [41] is adopted as the compound (rainy, snowy, and cloudy) and open domains (overcast) (i.e., C-Driving [23])." |
| Dataset Splits | No | The paper mentions a short (5K-iteration) and a longer (150K-iteration) training scheme but does not specify the dataset splits (e.g., training/validation/test percentages or sample counts) needed for reproduction. |
| Hardware Specification | No | The paper uses a pre-trained VGG-16 [36] as its backbone network but does not specify the hardware (e.g., GPU model, CPU type) used for training or inference. |
| Software Dependencies | No | The paper mentions using LS GAN [27] for Adapt-step training and an ImageNet-pretrained VGG model, but it does not give version numbers for any software, libraries, or frameworks (e.g., Python, PyTorch, TensorFlow, CUDA). |
| Experiment Setup | Yes | "Here, we use λGAN = 1, λsem = 10, λStyle = 10, λOut = 0.01, λtask = 1. For the short training scheme (5K iteration), we follow the same experimental setup of [23]. For the longer training scheme (150K iteration), we use LS GAN [27] for Adapt-step training." (A sketch of the weighted objective follows this table.) |
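Since the paper itself provides no pseudocode, the following is a minimal, hedged sketch of how its three-step Discover-Hallucinate-Adapt scheme could be laid out, based only on the verbal description and figures. Every function here is a hypothetical stand-in, not the authors' implementation; the number of latent domains K and the style-feature choice are assumptions.

```python
# Hypothetical sketch of the Discover-Hallucinate-Adapt scheme; all
# functions are stand-ins for the components the paper describes.
import numpy as np
from sklearn.cluster import KMeans

K = 3  # assumed number of latent domains to discover

def style_features(images):
    # Stand-in for style statistics (the paper uses style cues from a
    # pretrained network); here simply per-image channel means.
    return images.reshape(len(images), images.shape[1], -1).mean(axis=2)

def discover(target_images):
    # Step 1 (Discover): cluster compound-target images by style into
    # K latent domains.
    feats = style_features(target_images)
    return KMeans(n_clusters=K, n_init=10).fit_predict(feats)

def hallucinate(source_image, latent_domain):
    # Step 2 (Hallucinate): translate a source image into the style of
    # the given latent domain (an image-to-image translator in the
    # paper; identity stand-in here).
    return source_image

def adapt_step(source_image, target_image, latent_domain):
    # Step 3 (Adapt): one adversarial adaptation update using the
    # discriminator assigned to this latent domain (omitted here).
    pass

# Toy usage on random arrays standing in for GTA5 / C-Driving images.
src = np.random.rand(8, 3, 64, 64).astype(np.float32)
tgt = np.random.rand(8, 3, 64, 64).astype(np.float32)
domains = discover(tgt)
for s, t, d in zip(src, tgt, domains):
    adapt_step(hallucinate(s, d), t, d)
```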
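The Experiment Setup row does pin down the loss weights. A minimal sketch of how the total objective might be assembled from them is shown below; the weighted-sum form and the individual loss values are assumptions, only the λ values come from the paper.

```python
import torch

# Loss weights as reported in the paper's experiment setup.
LAMBDA = {"gan": 1.0, "sem": 10.0, "style": 10.0, "out": 0.01, "task": 1.0}

def total_loss(losses: dict) -> torch.Tensor:
    # Weighted sum of the individual objectives; `losses` maps each
    # term name to its (hypothetically computed) scalar value.
    return sum(LAMBDA[name] * value for name, value in losses.items())

# Toy usage with placeholder unit losses.
dummy = {name: torch.tensor(1.0) for name in LAMBDA}
print(total_loss(dummy))  # tensor(22.0100)
```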