SF(DA)$^2$: Source-free Domain Adaptation Through the Lens of Data Augmentation
Authors: Uiwon Hwang, Jonghyun Lee, Juhyeon Shin, Sungroh Yoon
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We performed experiments on challenging benchmark datasets, including VisDA (Peng et al., 2017), DomainNet (Peng et al., 2019), PointDA (Qin et al., 2019), and VisDA-RSUT (Li et al., 2021). We verified that our method outperforms existing state-of-the-art methods on 2D image, 3D point cloud, and highly imbalanced datasets. |
| Researcher Affiliation | Academia | Uiwon Hwang (1), Jonghyun Lee (2), Juhyeon Shin (3), Sungroh Yoon (2,3); (1) Division of Digital Healthcare, Yonsei University; (2) Department of Electrical and Computer Engineering, Seoul National University; (3) Interdisciplinary Program in Artificial Intelligence, Seoul National University |
| Pseudocode | Yes | Algorithm 1 Adaptation procedure of SF(DA)2 |
| Open Source Code | Yes | Code is available in Supplementary Material. |
| Open Datasets | Yes | In this section, we evaluate the performance of SF(DA)2 on several benchmark datasets: Office-31 (Saenko et al., 2010), VisDA (Peng et al., 2017), DomainNet (Peng et al., 2019), PointDA-10 (Qin et al., 2019), and VisDA-RSUT (Li et al., 2021). |
| Dataset Splits | No | The paper does not explicitly provide the training/validation/test splits (e.g., percentages or counts) needed to reproduce the experiments; it only reports the total numbers of training and test images for PointDA-10 as part of the dataset description. |
| Hardware Specification | Yes | Experiments are conducted on an NVIDIA A40 GPU. |
| Software Dependencies | No | The paper mentions network architectures (ResNet-50, ResNet-101, PointNet) and optimizers (SGD, Adam) but does not provide specific version numbers for software libraries or frameworks such as Python, PyTorch, or TensorFlow. |
| Experiment Setup | Yes | Most hyperparameters of our method do not require heavy tuning. We set K to 5 on VisDA, PointDA-10, and VisDA-RSUT, and 2 on DomainNet. We set α1 to 1e-4 on VisDA, DomainNet, and PointDA-10, and 1e-3 on VisDA-RSUT. We set α2 to 10 on VisDA, PointDA-10, and VisDA-RSUT, and 1 on DomainNet. We adopt SGD with momentum 0.9 and train 15 epochs for VisDA, DomainNet, and VisDA-RSUT. We adopt Adam (Kingma & Ba, 2014) and train 100 epochs for PointDA-10. We set the batch size to 64 except for DomainNet, where we set it to 128 for a fair comparison. We set the learning rate to 1e-4 for VisDA and VisDA-RSUT, 5e-5 for DomainNet, and 1e-6 for PointDA-10, except for the last two layers. Learning rates for the last two layers are increased by a factor of 10, except for PointDA-10, where they are increased by a factor of 2 following NRC (Yang et al., 2021a). |
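
The reported setup could be wired up roughly as in the sketch below, which collects the per-dataset hyperparameters and builds the optimizer with the scaled learning rate for the last two layers. The `HPARAMS` table reflects the values quoted above; everything else (the `build_optimizer` helper, the backbone/head split, and the toy modules) is an illustrative assumption, not the authors' code.

```python
# Minimal sketch of the reported per-dataset hyperparameters and optimizer setup.
# Values in HPARAMS come from the paper's setup description; the module split and
# helper names are hypothetical stand-ins for the authors' actual implementation.
import torch
import torch.nn as nn

HPARAMS = {
    "VisDA":      dict(K=5, alpha1=1e-4, alpha2=10, lr=1e-4, epochs=15,  batch_size=64,  optim="sgd"),
    "DomainNet":  dict(K=2, alpha1=1e-4, alpha2=1,  lr=5e-5, epochs=15,  batch_size=128, optim="sgd"),
    "PointDA-10": dict(K=5, alpha1=1e-4, alpha2=10, lr=1e-6, epochs=100, batch_size=64,  optim="adam"),
    "VisDA-RSUT": dict(K=5, alpha1=1e-3, alpha2=10, lr=1e-4, epochs=15,  batch_size=64,  optim="sgd"),
}

def build_optimizer(backbone: nn.Module, last_two_layers: nn.Module, dataset: str):
    """Build SGD (momentum 0.9) or Adam with a larger learning rate on the last
    two layers: x10 in general, x2 for PointDA-10, as reported in the setup."""
    hp = HPARAMS[dataset]
    head_scale = 2.0 if dataset == "PointDA-10" else 10.0
    param_groups = [
        {"params": backbone.parameters(), "lr": hp["lr"]},
        {"params": last_two_layers.parameters(), "lr": hp["lr"] * head_scale},
    ]
    if hp["optim"] == "sgd":
        return torch.optim.SGD(param_groups, lr=hp["lr"], momentum=0.9)
    return torch.optim.Adam(param_groups, lr=hp["lr"])

if __name__ == "__main__":
    # Toy backbone/head pair standing in for, e.g., a ResNet-101 feature
    # extractor and its last two layers (purely illustrative).
    backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
    head = nn.Sequential(nn.Linear(64, 256), nn.Linear(256, 12))
    optimizer = build_optimizer(backbone, head, "VisDA")
    print(optimizer)
```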