Exploiting Diverse Characteristics and Adversarial Ambivalence for Domain Adaptive Segmentation

Authors: Bowen Cai, Huan Fu, Rongfei Jia, Binqiang Zhao, Hua Li, Yinghui Xu

AAAI 2021, pp. 6850-6858

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate our method (DCAA) on various adaptation scenarios where the target images vary in weather conditions. The comparisons against baselines and the state-of-the-art approaches demonstrate the superiority of DCAA over the competitors.
Researcher Affiliation | Collaboration | Bowen Cai (1,2), Huan Fu (1), Rongfei Jia (1), Binqiang Zhao (1), Hua Li (2), and Yinghui Xu (1); (1) Alibaba Group, (2) Institute of Computing Technology, Chinese Academy of Sciences
Pseudocode | No | The paper describes its methods using textual explanations and figures, but does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | Our codes will be made public available.
Open Datasets | Yes | We evaluate our proposed approach on three challenging adaptation scenarios, i.e., GTA5 → Cityscapes (Richter et al. 2016; Cordts et al. 2016; Sakaridis et al. 2018; Hu et al. 2019), SYNTHIA → Cityscapes (Ros et al. 2016), and GTA5 → BDD100K (Yu et al. 2020a).
Dataset Splits | Yes | Cityscapes-Cloudy provides 3,475 images with a resolution of 2048 × 1024, which are officially split into 2,975 training images and 500 validation images.
Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments.
Software Dependencies | No | The paper does not provide software dependency details with version numbers in the main text, stating that implementation details are in the supplemental materials.
Experiment Setup | Yes | The full objective for our CGST can be expressed as L_CGST = L^c_GAN + L_cls + λ_sc · L_sc (Eq. 4), where the trade-off parameter λ_sc is set to 5.0 in our paper. In our paper, D is set to 256, and L = 19 is the number of semantic categories. We set λ_p to 0.6 to generate more pseudo-labels for target images benefiting from the adversarial ambivalence mechanism.
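The hyper-parameters quoted above can be made concrete with a short sketch. Since the authors' code has not been released, the snippet below is only a PyTorch-style illustration under stated assumptions: the loss tensors gan_loss, cls_loss, and sc_loss are hypothetical placeholders for the three terms of Eq. 4, and the pseudo-label rule is a generic softmax-confidence threshold at λ_p = 0.6; it is not the paper's actual implementation.

```python
# Hedged sketch (not the authors' code): combining the quoted CGST objective
# and a confidence-threshold pseudo-label rule, assuming standard PyTorch ops.
import torch
import torch.nn.functional as F

LAMBDA_SC = 5.0   # trade-off weight for the L_sc term, as quoted from Eq. 4
LAMBDA_P = 0.6    # pseudo-label confidence threshold quoted in the setup row
NUM_CLASSES = 19  # L = 19 semantic categories (Cityscapes label set)


def cgst_objective(gan_loss, cls_loss, sc_loss):
    """L_CGST = L^c_GAN + L_cls + lambda_sc * L_sc (Eq. 4); inputs are scalar loss tensors."""
    return gan_loss + cls_loss + LAMBDA_SC * sc_loss


def select_pseudo_labels(logits):
    """Keep target-image predictions whose softmax confidence exceeds lambda_p.

    logits: (B, NUM_CLASSES, H, W) raw segmentation scores for target images.
    Returns pseudo-labels of shape (B, H, W), with 255 marking ignored pixels
    (255 as ignore index is a common segmentation convention, assumed here).
    """
    probs = F.softmax(logits, dim=1)
    confidence, labels = probs.max(dim=1)
    labels[confidence < LAMBDA_P] = 255
    return labels
```

As a usage example, one would call select_pseudo_labels on the segmentation logits of unlabeled target images and then pass the resulting labels (with ignore index 255) to a cross-entropy loss; how the paper's adversarial ambivalence mechanism modulates this selection is not specified here and is left out of the sketch.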