An Adversarial Perturbation Oriented Domain Adaptation Approach for Semantic Segmentation

Authors: Jihan Yang, Ruijia Xu, Ruiyu Li, Xiaojuan Qi, Xiaoyong Shen, Guanbin Li, Liang Lin

AAAI 2020, pp. 12613-12620 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our approach achieves the state-of-the-art performance on two challenging domain adaptation tasks for semantic segmentation: GTA5 → Cityscapes and SYNTHIA → Cityscapes. Extensive experiments on GTA5 → Cityscapes and SYNTHIA → Cityscapes have verified the state-of-the-art performance of our method. We evaluate our method along with several state-of-the-art algorithms on two challenging synthesized-2-real UDA benchmarks, i.e., GTA5 → Cityscapes and SYNTHIA → Cityscapes. Table 1: Results of adapting GTA5 to Cityscapes.
Researcher Affiliation | Collaboration | 1 School of Data and Computer Science, Sun Yat-sen University, China; 2 Tencent YouTu Lab; 3 University of Oxford; 4 DarkMatter AI Research
Pseudocode | No | The paper describes the steps of its method in paragraph form (Step 1, Step 2, Step 3) but does not contain a clearly labeled 'Pseudocode' or 'Algorithm' block with structured steps.
Open Source Code | No | The paper does not include any explicit statement about releasing open-source code for the described methodology, nor does it provide a link to a code repository.
Open Datasets | Yes | We evaluate our method along with several state-of-the-art algorithms on two challenging synthesized-2-real UDA benchmarks, i.e., GTA5 → Cityscapes and SYNTHIA → Cityscapes. Cityscapes is a real-world image dataset, consisting of 2,975 images for training and 500 images for validation. GTA5 contains 24,966 synthesized frames captured from the video game. SYNTHIA is a synthetic urban scenes dataset with 9,400 images. The paper cites these datasets: (Cordts et al. 2016), (Richter et al. 2016), (Ros et al. 2016).
Dataset Splits | Yes | Cityscapes is a real-world image dataset, consisting of 2,975 images for training and 500 images for validation.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or cloud instance specifications used for running its experiments.
Software Dependencies | No | The paper mentions 'PyTorch' as the implementation framework but does not specify its version number or the version numbers of any other key software libraries or solvers used.
Experiment Setup | Yes | During training, we use SGD (Bottou 2010) for G and C with momentum 0.9, learning rate 2.5 × 10⁻⁴ and weight decay 10⁻⁴. We use Adam (Kingma and Ba 2014) with learning rate 10⁻⁴ to optimize D. And we follow the polynomial annealing procedure (Chen et al. 2017a) to schedule the learning rate. When generating adversarial features, the iteration K of I-FGSPM is set to 3. Note that we set the ε₁, ε₂ and ε₃ in Eq. (5) and (6) as 0.01, 0.002 and 0.011 separately. α₁, α₂ and α₃ are 0.2, 0.002 and 0.0005 separately.
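
As a rough illustration of the reported experiment setup, the optimizer and learning-rate schedule described in the row above can be wired up in PyTorch as sketched below. This is a minimal sketch, not the authors' released code: the network stand-ins (`generator`, `classifier`, `discriminator`), the total iteration count, and the polynomial power of 0.9 are assumptions not stated here; only the optimizer choices, momentum, learning rates, and weight decay come from the paper's reported values.

```python
import torch

# Hypothetical stand-ins for the three networks; the paper's actual
# architectures for G, C and D are not specified in this section.
generator = torch.nn.Conv2d(3, 64, 3)      # stand-in for G
classifier = torch.nn.Conv2d(64, 19, 1)    # stand-in for C
discriminator = torch.nn.Conv2d(19, 1, 1)  # stand-in for D

# SGD for G and C: momentum 0.9, lr 2.5e-4, weight decay 1e-4 (reported values).
opt_gc = torch.optim.SGD(
    list(generator.parameters()) + list(classifier.parameters()),
    lr=2.5e-4, momentum=0.9, weight_decay=1e-4,
)

# Adam for D with lr 1e-4 (reported value).
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)

def poly_lr(base_lr, cur_iter, max_iter, power=0.9):
    """Polynomial annealing of the learning rate; the power (0.9) is assumed."""
    return base_lr * (1 - cur_iter / max_iter) ** power

max_iter = 250_000  # assumed; the paper's iteration budget is not given here
for it in range(max_iter):
    # Anneal both optimizers; whether D's lr is also annealed is an assumption.
    for group in opt_gc.param_groups:
        group["lr"] = poly_lr(2.5e-4, it, max_iter)
    for group in opt_d.param_groups:
        group["lr"] = poly_lr(1e-4, it, max_iter)
    # ... forward pass, I-FGSPM adversarial feature generation (K = 3 iterations),
    # loss computation, opt_gc.step() and opt_d.step() would go here ...
    break  # placeholder: this sketch only illustrates the optimizer/schedule wiring
```

The I-FGSPM perturbation step (K = 3) and the ε and α coefficients would plug into the commented portion of the loop; their exact formulation is given by Eq. (5) and (6) of the paper and is not reproduced here.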