Cross-Domain Grouping and Alignment for Domain Adaptive Semantic Segmentation
Authors: Minsu Kim, Sunghun Joung, Seungryong Kim, Jungin Park, Ig-Jae Kim, Kwanghoon Sohn | pp. 1799-1807
AAAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments show that our method consistently boosts the adaptation performance in semantic segmentation, outperforming the state-of-the-arts on various domain adaptation settings. |
| Researcher Affiliation | Academia | Minsu Kim1, Sunghun Joung1, Seungryong Kim2, Jungin Park1, Ig-Jae Kim3, Kwanghoon Sohn1 1 Yonsei University 2 Korea University 3 Korea Institute of Science and Technology (KIST) |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to a code repository. |
| Open Datasets | Yes | For experiments, we use the GTA5 (Richter et al. 2016) and SYNTHIA (Ros et al. 2016) as source dataset... We use Cityscapes (Cordts et al. 2016) as target dataset... |
| Dataset Splits | Yes | We use Cityscapes (Cordts et al. 2016) as target dataset, which consists of 2,975, 500 and 1,525 images with training, validation and test set. We train our network with training set, while evaluation is done using validation set. |
| Hardware Specification | Yes | The proposed method was implemented in PyTorch library (Paszke et al. 2017) and simulated on a PC with a single RTX Titan GPU. |
| Software Dependencies | No | The paper mentions implementation using 'PyTorch library (Paszke et al. 2017)' but does not specify a version number for PyTorch or any other software dependencies with their versions. |
| Experiment Setup | Yes | To train the segmentation network, we utilize stochastic gradient descent (SGD) (1998), where the learning rate is set to 2.5 × 10^-4. For the grouping network, we utilize SGD with a learning rate of 1 × 10^-3. Both learning rates decreased with the poly learning rate policy with power fixed to 0.9 and momentum as 0.9. For discriminator training, we use the Adam (2014) optimizer with an initial learning rate of 1 × 10^-4. We jointly train our segmentation network, grouping network, and discriminator using (7) for a total of 120k iterations. We randomly paired source and target images in each iteration. Through cross-validation using grid-search in log-scale, we set the hyper-parameters λ_co, λ_orth, λ_cadv, λ_cl and τ as 0.001, 0.001, 0.001, 0.0001 and 0.05, respectively. |
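The poly learning rate policy quoted above (power 0.9, base rates 2.5 × 10^-4 and 1 × 10^-3, 120k iterations) can be sketched as follows. This is a minimal illustration, not the authors' code; the function name and structure are ours, and only the schedule formula and hyper-parameter values come from the paper.

```python
def poly_lr(base_lr, cur_iter, max_iter, power=0.9):
    """Poly learning-rate decay: lr = base_lr * (1 - iter / max_iter) ** power.

    At iteration 0 the rate equals base_lr; it decays smoothly to 0 at max_iter.
    """
    return base_lr * (1.0 - cur_iter / max_iter) ** power

# Schedules matching the reported setup (segmentation net and grouping net):
seg_lr_start = poly_lr(2.5e-4, 0, 120_000)        # 2.5e-4 at the first iteration
seg_lr_mid = poly_lr(2.5e-4, 60_000, 120_000)     # roughly half-decayed
grp_lr_mid = poly_lr(1e-3, 60_000, 120_000)
```

In practice this would be applied per iteration, e.g. by setting `param_group['lr']` on a PyTorch optimizer before each step.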