Amplitude Spectrum Transformation for Open Compound Domain Adaptive Semantic Segmentation

Authors: Jogendra Nath Kundu, Akshay R Kulkarni, Suvaansh Bhambri, Varun Jampani, Venkatesh Babu Radhakrishnan. Pages 1220-1227.

AAAI 2022

Reproducibility Variable Result LLM Response
Research Type Experimental We thoroughly evaluate the proposed approach against state-of-the-art prior works in the Open Compound DA setting. Datasets. Following Gong et al. (2021), we used the synthetic GTA5 (Richter et al. 2016) and SYNTHIA (Ros et al. 2016) datasets as the source. [...] We use the mean intersection-over-union (mIoU) metric for evaluating the performance. [...] Ablation Study. Table 5 presents a detailed ablation to underline the equal importance of AST-Sim and AST-Norm.
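The mIoU metric quoted above averages, over all semantic classes, the ratio of pixels where prediction and ground truth agree on a class to the pixels where either assigns that class. A minimal pure-Python sketch (the paper itself does not specify an implementation; flattened label lists and the skipping of classes absent from both maps are illustrative assumptions):

```python
def mean_iou(pred, gt, num_classes):
    """Mean intersection-over-union over classes, computed on
    flattened per-pixel label lists. Classes that appear in neither
    the prediction nor the ground truth are skipped."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, g in zip(pred, gt) if p == c and g == c)
        union = sum(1 for p, g in zip(pred, gt) if p == c or g == c)
        if union > 0:
            ious.append(inter / union)
    return sum(ious) / len(ious)

# Toy example: six pixels, three classes.
pred = [0, 0, 1, 1, 2, 2]
gt   = [0, 1, 1, 1, 2, 0]
print(mean_iou(pred, gt, 3))  # -> 0.5
```

Per-class IoUs here are 1/3, 2/3, and 1/2, averaging to 0.5; real segmentation evaluations apply the same formula to full-resolution label maps.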
Researcher Affiliation Collaboration Jogendra Nath Kundu1*, Akshay R Kulkarni1*, Suvaansh Bhambri1*, Varun Jampani2, Venkatesh Babu Radhakrishnan1 1Indian Institute of Science, Bangalore 2Google Research
Pseudocode Yes Algorithm 1: Pseudo-code for the proposed approach
Open Source Code Yes Project page: https://sites.google.com/view/ast-ocdaseg
Open Datasets Yes Datasets. Following Gong et al. (2021), we used the synthetic GTA5 (Richter et al. 2016) and SYNTHIA (Ros et al. 2016) datasets as the source. [...] Further, we use Cityscapes (Cordts et al. 2016), KITTI (Geiger et al. 2013) and WildDash (Zendel et al. 2018) datasets as extended open domains to test generalization on diverse unseen domains.
Dataset Splits No The paper mentions using GTA5 and SYNTHIA as source datasets and C-Driving, Cityscapes, KITTI, and WildDash as target/open domains, but it does not explicitly specify a separate validation split or how it was used.
Hardware Specification No The paper does not provide any specific details about the hardware (e.g., GPU models, CPU types, memory) used to run the experiments.
Software Dependencies No Following Park et al. (2020); Gong et al. (2021), we employ DeepLabv2 (Chen et al. 2017a) with a VGG16 (Simonyan et al. 2015) backbone as the CNN segmentor. We use SGD optimizer with a learning rate of 1e-4, momentum of 0.9 and a weight decay of 5e-4 during training. While software components like DeepLabv2 and VGG16 are mentioned, no specific version numbers for these or other libraries (e.g., Python, PyTorch/TensorFlow) are provided.
Experiment Setup Yes We use SGD optimizer with a learning rate of 1e-4, momentum of 0.9 and a weight decay of 5e-4 during training. We also use a polynomial decay with power 0.9 as the learning rate scheduler. Following Park et al. (2020), we use two training schemes for GTA5 i.e., short training scheme with 5k iterations and long training scheme with 150k iterations.