SRoUDA: Meta Self-Training for Robust Unsupervised Domain Adaptation
Authors: Wanqing Zhu, Jia-Li Yin, Bo-Hao Chen, Ximeng Liu
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on various benchmark datasets demonstrate the state-of-the-art performance of SRoUDA, where it achieves significant model robustness improvement without harming clean accuracy. |
| Researcher Affiliation | Academia | 1 Fujian Province Key Laboratory of Information Security and Network Systems, Fuzhou 350108, China 2 College of Computer Science and Big Data, Fuzhou University, Fuzhou 350108, China 3 Department of Computer Science and Engineering, Yuan Ze University, Taiwan |
| Pseudocode | Yes | Algorithm 1: Meta self-training for robust unsupervised domain adaptation (SRoUDA); a hedged sketch of such an alternating loop appears after the table. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described in this paper. |
| Open Datasets | Yes | We evaluate our method on both mainstream UDA benchmark datasets and AT datasets: 1) Office-31, a standard domain adaptation dataset with three domains: Amazon (A, 2,817 images), Webcam (W, 795 images), and DSLR (D, 498 images); it is imbalanced across domains. 2) Digits, containing three domains: MNIST (M), USPS (U), and SVHN (S). 3) CIFAR and STL; both contain 10 categories, 9 of which overlap. |
| Dataset Splits | No | The paper uses standard benchmark datasets but does not explicitly provide specific train/validation/test dataset splits (e.g., percentages or exact counts) for reproduction. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions 'the popular UDA codebase DALIB' and 'the Adam optimizer' but does not provide specific version numbers for these or other software dependencies. |
| Experiment Setup | Yes | During the pre-training of the source model, we adopt the training settings of the popular UDA codebase DALIB and train the source model for 20 epochs with a learning rate of 0.004. For the following meta self-training stage, we iteratively update the source and target models. In the AT process, we set k_max = 10 and ϵ = 8/255 for adversarial example generation, and the Adam optimizer with a learning rate of 0.0015 is used to update the target model. We update the source model every epoch in this process. During both the pre-training and meta self-training processes, we also adopt widely used data augmentations, including random flipping and rotation, to avoid overfitting (a hedged PGD sketch with these settings appears after the table). |
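
The Experiment Setup row reports adversarial example generation with k_max = 10 steps and ϵ = 8/255. Below is a minimal PGD-style sketch that only reuses those two reported values; the step size, random start, and model/loss choices are assumptions for illustration and are not taken from the paper's code.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, images, labels, eps=8 / 255, k_max=10, step_size=2 / 255):
    """Craft adversarial examples by iterated, projected gradient ascent on the loss."""
    x_adv = images.clone().detach()
    # Random start inside the eps-ball (a common choice, assumed here).
    x_adv = x_adv + torch.empty_like(x_adv).uniform_(-eps, eps)
    x_adv = torch.clamp(x_adv, 0.0, 1.0)

    for _ in range(k_max):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), labels)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + step_size * grad.sign()                           # ascend the loss
            x_adv = torch.min(torch.max(x_adv, images - eps), images + eps)   # project to the eps-ball
            x_adv = torch.clamp(x_adv, 0.0, 1.0)                              # keep valid pixel range
        x_adv = x_adv.detach()
    return x_adv
```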
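
The Pseudocode row names Algorithm 1 (meta self-training), but the algorithm body is not reproduced in this report. Based only on the setup description above (the source model pseudo-labels unlabeled target data, the target model is adversarially trained with Adam at a learning rate of 0.0015, and the source model is refreshed once per epoch), a rough outline might look like the sketch below. It reuses `pgd_attack` from the previous sketch; `target_loader`, `source_update_fn`, and the loop structure are assumptions, and the paper's actual meta-update of the source model is not reproduced.

```python
import torch
import torch.nn.functional as F

def meta_self_training(source_model, target_model, target_loader,
                       source_update_fn=None, epochs=10, lr=0.0015,
                       eps=8 / 255, k_max=10):
    """Alternate pseudo-labeling (source model) with adversarial training (target model)."""
    optimizer = torch.optim.Adam(target_model.parameters(), lr=lr)
    for _ in range(epochs):
        for images, _labels in target_loader:          # UDA: target labels are never used
            with torch.no_grad():
                pseudo_labels = source_model(images).argmax(dim=1)
            # Adversarial training on pseudo-labeled target data
            # (pgd_attack is the sketch shown above).
            x_adv = pgd_attack(target_model, images, pseudo_labels, eps=eps, k_max=k_max)
            loss = F.cross_entropy(target_model(x_adv), pseudo_labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        # The paper states the source model is updated every epoch; the exact
        # meta-update rule is not reproduced here, so it is left as an optional hook.
        if source_update_fn is not None:
            source_update_fn(source_model, target_model)
```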