Bidirectional Domain Mixup for Domain Adaptive Semantic Segmentation
Authors: Daehan Kim, Minseok Seo, Kwanyong Park, Inkyu Shin, Sanghyun Woo, In So Kweon, Dong-Geol Choi
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we present experimental results to validate the proposed BDM for domain adaptive semantic segmentation. We first describe experimental configurations in detail. After that, we validate our BDM on two public benchmark datasets, GTA5 and SYNTHIA (Ros et al. 2016), and provide detailed analyses. Note that the Intersection-over-Union (IoU) metric is used for all the experiments. |
| Researcher Affiliation | Collaboration | Hanbat National University, Korea; SI Analytics, Korea; Korea Advanced Institute of Science and Technology (KAIST), Korea |
| Pseudocode | Yes | Algorithm 1: Bidirectional Domain Mixup |
| Open Source Code | Yes | Visit our project page with the code at https://sites.google.com/view/bidirectional-domain-mixup. |
| Open Datasets | Yes | We evaluate our proposed Bidirectional Domain Mixup on two popular domain adaptive semantic segmentation benchmarks (SYNTHIA → Cityscapes and GTA5 → Cityscapes). Cityscapes (Cordts et al. 2016) is a real-world urban scene dataset... SYNTHIA (Ros et al. 2016) is a synthetic urban scene dataset... GTA5 (Richter et al. 2016) dataset is another synthetic dataset... |
| Dataset Splits | Yes | Cityscapes (Cordts et al. 2016) is a real-world urban scene dataset consisting of a training set with 2,975 images, a validation set with 500 images and a testing set with 1,525 images. |
| Hardware Specification | No | The paper does not specify any particular hardware components like GPU or CPU models used for the experiments. It lacks details such as 'NVIDIA A100' or 'Intel Xeon'. |
| Software Dependencies | No | The paper mentions using the 'Deeplabv2 architecture with pre-trained ResNet101' and links to the 'official implementation' of warm-up models (footnotes 1-4), but it does not specify software dependencies with version numbers (e.g., 'Python 3.x', 'PyTorch 1.x', 'CUDA 11.x'). |
| Experiment Setup | Yes | In all our experiments, we used the Deeplabv2 (Chen et al. 2017) architecture... all hyperparameter settings such as batch size, learning rate, and iteration follow the standard protocol (Tsai et al. 2018). ... Given the warm-up model, we further train the model with the proposed BDM framework for 1,000,000 iterations. ...we set the number of patches W and H as 4 and 3. For the number of randomly generated boxes in the cut process, we choose 4 as default. ...the number of pseudo-label reliability intervals, R, was set to 3 in all experiments. |
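
The rows above name Algorithm 1 (Bidirectional Domain Mixup) and its quoted defaults: a W × H = 4 × 3 patch grid and 4 randomly generated boxes in the cut process. Since only these hyperparameters are quoted, the following is a minimal NumPy sketch of one plausible reading of the patch-level cut-and-paste, not the authors' implementation; the function names (`random_boxes`, `paste_patches`, `bidirectional_mixup`) and the choice to reuse the same box set in both mixing directions are assumptions.

```python
import numpy as np

def random_boxes(grid_w=4, grid_h=3, n_boxes=4, rng=None):
    """Sample n_boxes distinct patch positions from a grid_w x grid_h grid.

    grid_w=4, grid_h=3, n_boxes=4 follow the defaults quoted in the
    Experiment Setup row; everything else is an assumption.
    """
    rng = rng or np.random.default_rng()
    idx = rng.choice(grid_w * grid_h, size=n_boxes, replace=False)
    return [(i % grid_w, i // grid_w) for i in idx]

def paste_patches(dst_img, dst_lbl, src_img, src_lbl, boxes, grid_w=4, grid_h=3):
    """Copy the selected grid patches (image and label) from src onto dst."""
    H, W = dst_img.shape[:2]
    ph, pw = H // grid_h, W // grid_w
    out_img, out_lbl = dst_img.copy(), dst_lbl.copy()
    for gx, gy in boxes:
        ys, xs = gy * ph, gx * pw
        out_img[ys:ys + ph, xs:xs + pw] = src_img[ys:ys + ph, xs:xs + pw]
        out_lbl[ys:ys + ph, xs:xs + pw] = src_lbl[ys:ys + ph, xs:xs + pw]
    return out_img, out_lbl

def bidirectional_mixup(src_img, src_lbl, tgt_img, tgt_pseudo, rng=None):
    """Build both mixed views: target canvas with source patches, and vice versa."""
    boxes = random_boxes(rng=rng)
    src_to_tgt = paste_patches(tgt_img, tgt_pseudo, src_img, src_lbl, boxes)
    tgt_to_src = paste_patches(src_img, src_lbl, tgt_img, tgt_pseudo, boxes)
    return src_to_tgt, tgt_to_src
```

Under this reading, `bidirectional_mixup(src_img, src_lbl, tgt_img, tgt_pseudo)` returns a target canvas carrying ground-truth-labeled source patches and a source canvas carrying pseudo-labeled target patches, giving the two mixing directions the method's name suggests.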
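
The setup also fixes the number of pseudo-label reliability intervals at R = 3, but the binning rule itself is not quoted. A minimal sketch, assuming pixels are bucketed by their max softmax confidence into R equal-width bins (both the equal-width choice and the function name are hypothetical):

```python
import numpy as np

def reliability_intervals(probs, r=3):
    """Assign each pixel's pseudo-label to one of r reliability bins.

    probs: (H, W, C) per-pixel class probabilities. r=3 follows the
    paper's quoted default; equal-width confidence bins are an assumption.
    """
    conf = probs.max(axis=-1)              # per-pixel max class probability
    edges = np.linspace(0.0, 1.0, r + 1)   # r equal-width confidence bins
    bins = np.clip(np.digitize(conf, edges) - 1, 0, r - 1)
    return bins                            # 0 = least reliable, r-1 = most
```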