Joint Adversarial Learning for Domain Adaptation in Semantic Segmentation

Authors: Yixin Zhang, Zilei Wang

AAAI 2020, pp. 6877-6884 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental The extensive experiments on two widely used benchmarks show that our method can bring considerable performance improvement over different baseline methods, which demonstrates the effectiveness of our method in output-space adaptation.
Researcher Affiliation Academia Yixin Zhang, Zilei Wang Department of Automation, University of Science and Technology of China zhyx12@mail.ustc.edu.cn, zlwang@ustc.edu.cn
Pseudocode No The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code No The paper does not provide any concrete access information (e.g., repository link, explicit statement of code release) for the source code of the methodology.
Open Datasets Yes Datasets. We use the popular synthetic-to-real domain adaptation set-ups, i.e., GTA5→Cityscapes and SYNTHIA→Cityscapes. Cityscapes (Cordts et al. 2016) is a real-world dataset... GTA5 (Richter et al. 2016)... SYNTHIA (Ros et al. 2016)...
Dataset Splits Yes The training set of 2,975 images is used in the training phase. In both set-ups, the 500 images of the Cityscapes validation set are used for evaluation.
Hardware Specification No The paper does not provide specific hardware details such as GPU or CPU models used for the experiments.
Software Dependencies No The paper mentions software components like "SGD optimizer", "Adam optimizer", "DeepLabv2", "VGG16", "ResNet101", "DCGAN", and "ASPP" but does not provide specific version numbers for these or other software dependencies.
Experiment Setup Yes SGD optimizer with learning rate 0.00025, momentum 0.9, and weight decay 0.0001. Domain discriminators are trained with the Adam optimizer with learning rate 0.0001. For the weight transfer module, we use the SGD optimizer with learning rate 0.0001, momentum 0.9, and weight decay 0. For the hyper-parameters in Eq. (10), we set λ_adv^1 = 0.0002, λ_adv^2 = 0.001, λ_seg^1 = 0.1, and λ_seg^2 = 1.0, following AdaptSegNet. We also set λ_adv-wtm^1 = 0.0002 and λ_adv-wtm^2 = 0.001.
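The reported setup can be summarized in a small configuration sketch. This is not code from the paper (no code was released); the dictionary keys and the `total_loss` helper are hypothetical names, but the numeric values are exactly those quoted above.

```python
# Hypothetical sketch of the reported training configuration.
# All structure/names are assumptions; only the numbers come from the paper's text.

# Optimizer settings for the three trained components.
SEG_OPT = {"optimizer": "SGD", "lr": 0.00025, "momentum": 0.9, "weight_decay": 0.0001}
DISC_OPT = {"optimizer": "Adam", "lr": 0.0001}
WTM_OPT = {"optimizer": "SGD", "lr": 0.0001, "momentum": 0.9, "weight_decay": 0.0}

# Loss weights for Eq. (10), following AdaptSegNet, plus the weight-transfer-module terms.
LAMBDA = {
    "adv1": 0.0002, "adv2": 0.001,
    "seg1": 0.1,    "seg2": 1.0,
    "adv_wtm1": 0.0002, "adv_wtm2": 0.001,
}

def total_loss(seg1, seg2, adv1, adv2, adv_wtm1, adv_wtm2):
    """Weighted sum of the individual loss terms (assumed additive form)."""
    return (LAMBDA["seg1"] * seg1 + LAMBDA["seg2"] * seg2
            + LAMBDA["adv1"] * adv1 + LAMBDA["adv2"] * adv2
            + LAMBDA["adv_wtm1"] * adv_wtm1 + LAMBDA["adv_wtm2"] * adv_wtm2)
```

With all six loss terms equal to 1.0, the helper simply returns the sum of the weights (1.1024), which is a quick sanity check that the weights were transcribed correctly.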