ICDA: Illumination-Coupled Domain Adaptation Framework for Unsupervised Nighttime Semantic Segmentation

Authors: Chenghao Dong, Xuejing Kang, Anlong Ming

IJCAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments show that our method reduces the complex domain gaps and achieves state-of-the-art performance for nighttime semantic segmentation. Our code is available at https://github.com/chenghaoDong666/ICDA.
Researcher Affiliation | Academia | Chenghao Dong, Xuejing Kang, Anlong Ming. School of Computer Science (National Pilot Software Engineering School), Beijing University of Posts and Telecommunications. {chdong, kangxuejing, mal}@bupt.edu.cn
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | Our code is available at https://github.com/chenghaoDong666/ICDA.
Open Datasets | Yes | CityScapes [Cordts et al., 2016] is a large dataset of urban street scenes with pixel-level annotations of 19 semantic categories. ... Dark Zurich [Sakaridis et al., 2019] is the mainly used unsupervised nighttime semantic segmentation dataset... BDD100k-night [Sakaridis et al., 2020; Yu et al., 2020].
Dataset Splits | Yes | During training, we only use its training set, which contains 2,975 images, as the source domain. ... The dataset also contains another 201 annotated nighttime images, including 50 images for validation and 151 for testing.
Hardware Specification | Yes | The whole framework is implemented using PyTorch on a single RTX 3080-Ti GPU.
Software Dependencies | No | The paper mentions: "The whole framework is implemented using PyTorch..." but does not specify a version number for PyTorch or any other software dependencies, which is required for reproducibility.
Experiment Setup | Yes | We use AdamW [Loshchilov and Hutter, 2019] as the optimizer with a weight decay of 0.01. The base learning rate is 6 × 10⁻⁵ for the encoder and DAR, and 6 × 10⁻⁴ for the decoder. We use the linear learning-rate warmup strategy with t_warm = 1.5k and t_total = 40k. After the warmup iterations, the learning rate is decreased using the poly policy with a power of 0.9. The batch size is set to 2 for each domain. Following Refign [Bruggemann et al., 2023], we apply random cropping with a crop size of 512 for the source domain pair, and a crop size of 960 first, then 512 for the target domain pair.
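The warmup-then-poly learning-rate schedule quoted in the experiment setup can be sketched as follows. This is a minimal illustration assuming a standard linear-warmup/poly-decay formulation; the function and constant names are ours, not taken from the ICDA repository.

```python
# Linear warmup for the first t_warm iterations, then polynomial ("poly")
# decay with power 0.9 until t_total, as described in the paper's setup.

T_WARM = 1_500     # warmup iterations (1.5k)
T_TOTAL = 40_000   # total training iterations (40k)
POWER = 0.9        # poly-decay power

def lr_factor(step: int, t_warm: int = T_WARM,
              t_total: int = T_TOTAL, power: float = POWER) -> float:
    """Multiplicative factor applied to the base learning rate at `step`."""
    if step < t_warm:
        # Linear warmup: factor rises from 0 to 1 over t_warm iterations.
        return step / t_warm
    # Poly decay: factor falls from 1 at t_warm to 0 at t_total.
    progress = (step - t_warm) / (t_total - t_warm)
    return (1.0 - progress) ** power

# Base rates from the paper: 6e-5 for the encoder and DAR, 6e-4 for the decoder.
encoder_lr = 6e-5 * lr_factor(20_000)
decoder_lr = 6e-4 * lr_factor(20_000)
```

In a PyTorch training loop, such a factor could be attached to an AdamW optimizer (with `weight_decay=0.01`, per the paper) through `torch.optim.lr_scheduler.LambdaLR`.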