DaDA: Distortion-aware Domain Adaptation for Unsupervised Semantic Segmentation
Authors: Sujin Jang, Joohan Na, Dokwan Oh
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental results highlight the effectiveness of our approach over state-of-the-art methods under unknown relative distortion across domains. We present extensive experimental results to validate our distortion-aware domain adaptation (DaDA) framework for semantic segmentation in the presence of both visual and geometric domain shifts. |
| Researcher Affiliation | Industry | Sujin Jang, Samsung Advanced Institute of Technology, s.steve.jang@samsung.com; Joohan Na, Samsung Advanced Institute of Technology, joohan.na@samsung.com; Dokwan Oh, Samsung Advanced Institute of Technology, dokwan.oh@samsung.com |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | Datasets and more information are available at https://sait-fdd.github.io/. The statement covers datasets and project information only; it does not indicate that source code for the method is released at the URL. |
| Open Datasets | Yes | The paper introduces each dataset (The Cityscapes dataset contains..., The GTAV dataset contains..., The Woodscape dataset consists of...), each followed by a citation. Woodscape [38], Cityscapes [8], and GTAV [29] are commonly used public datasets. |
| Dataset Splits | Yes | We use front and rear camera scenes containing 4,023 images in our experiments. The images are randomly split into a training set with 3,023 images and a validation set with 1,000 images. We randomly pulled 974 validation images, and the remaining 2,923 images are used for training. A hypothetical split sketch follows the table. |
| Hardware Specification | Yes | All of our code is written in PyTorch and trained on a single NVIDIA RTX A6000 GPU with 48 GB of memory. |
| Software Dependencies | No | All of our code is written in PyTorch. While PyTorch is mentioned, a specific version number is not provided, which is required for reproducibility. |
| Experiment Setup | Yes | We trained all networks with the Adam [18] solver with a batch size of 4. The learning rate is 0.2 × 10⁻⁵ for M and D_M, and 0.1 × 10⁻⁶ for G and D_G. We set the weight factors of the losses in Eq. (6) as: β1 = 100.0, β2 = 10.0, β3 = 10.0 for Cityscapes → Woodscape (or FDD); and β1 = 100.0, β2 = 1.0, β3 = 100.0 for GTAV → Woodscape (or FDD). A hypothetical optimizer sketch follows the table. |
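
The random split quoted in the Dataset Splits row can be mimicked in a few lines of PyTorch. This is a minimal sketch under the assumption of an index-level shuffle with an arbitrary seed; the paper does not publish its exact split or seed, so `manual_seed(0)` is purely illustrative.

```python
import torch

# Hypothetical reproduction of the WoodScape split described in the paper:
# 4,023 front/rear-camera images -> 3,023 train / 1,000 val.
num_images = 4023
g = torch.Generator().manual_seed(0)            # assumed seed, not from the paper
perm = torch.randperm(num_images, generator=g)  # random permutation of image indices
train_idx, val_idx = perm[:3023], perm[3023:]
assert len(train_idx) == 3023 and len(val_idx) == 1000
```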
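
The Experiment Setup row also translates directly into PyTorch. The sketch below uses the Adam learning rates and β weights quoted from the paper, but everything else is an assumption: the networks `M`, `D_M`, `G`, `D_G` are trivial stand-ins for the segmentation network, distortion module, and their discriminators, and the three loss terms are dummy placeholders rather than the authors' Eq. (6).

```python
import torch

# Stand-in modules; the real architectures are not reproduced here.
M,  D_M = torch.nn.Conv2d(3, 19, 1), torch.nn.Conv2d(19, 1, 1)
G,  D_G = torch.nn.Conv2d(3, 3, 1),  torch.nn.Conv2d(3, 1, 1)

# Adam solvers with the learning rates quoted in the table.
opt_M  = torch.optim.Adam(M.parameters(),   lr=0.2e-5)
opt_DM = torch.optim.Adam(D_M.parameters(), lr=0.2e-5)
opt_G  = torch.optim.Adam(G.parameters(),   lr=0.1e-6)
opt_DG = torch.optim.Adam(D_G.parameters(), lr=0.1e-6)

# Weighted objective in the spirit of Eq. (6), with the Cityscapes -> Woodscape
# beta values; the individual loss terms below are placeholders only.
beta1, beta2, beta3 = 100.0, 10.0, 10.0
loss_seg  = torch.tensor(0.0, requires_grad=True)  # segmentation loss (placeholder)
loss_adv  = torch.tensor(0.0, requires_grad=True)  # adversarial loss (placeholder)
loss_dist = torch.tensor(0.0, requires_grad=True)  # distortion loss (placeholder)
total = beta1 * loss_seg + beta2 * loss_adv + beta3 * loss_dist
```

Note that the paper uses a different weighting (β2 = 1.0, β3 = 100.0) for the GTAV → Woodscape (or FDD) setting, so the betas above would need to be swapped accordingly.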