Style Adaptation and Uncertainty Estimation for Multi-Source Blended-Target Domain Adaptation
Authors: Yuwu Lu, Haoyu Huang, Xue Hu
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments conducted on several challenging DA benchmarks, including the ImageCLEF-DA, Office-Home, VisDA-2017, and DomainNet datasets, demonstrate the superiority of our method over the state-of-the-art (SOTA) approaches. |
| Researcher Affiliation | Academia | Yuwu Lu, Haoyu Huang, and Xue Hu, School of Artificial Intelligence, South China Normal University {luyuwu2008, hyhuang99, hx1430940232}@163.com |
| Pseudocode | Yes | Algorithm 1 SAUE for MBDA |
| Open Source Code | Yes | The source code of SAUE is provided in the Supplementary Material. |
| Open Datasets | Yes | Four standard benchmark datasets are used to validate the effectiveness of our proposed method. The ImageCLEF-DA [40]... The Office-Home [41]... The DomainNet [14]... The VisDA-2017 [42] dataset... |
| Dataset Splits | No | The paper implicitly uses training and test data through its benchmark protocols, but it does not specify a distinct validation split or describe how one was used during training. |
| Hardware Specification | Yes | All experiments are run on a single GeForce RTX-4090 GPU, and the batch size of both the source and blended-target domains are set to 32. |
| Software Dependencies | Yes | We utilize PyTorch framework [43] to perform our experiments; the PyTorch version is 1.13.1 and CUDA version is 11.7. |
| Experiment Setup | Yes | The optimizer is Stochastic Gradient Descent (SGD) with a momentum parameter of 0.9 and a weight decay of 1e-3. The learning rate is 1e-3 and updated by the LambdaLR [43] during the training process. All experiments are run on a single GeForce RTX-4090 GPU, and the batch size of both the source and blended-target domains are set to 32. The hyper-parameters λe and λd, maximum iteration I, and mini-batch size B are also mentioned in Algorithm 1. A configuration sketch based on these settings follows the table. |
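For reference, below is a minimal PyTorch sketch of the reported optimizer and scheduler configuration (SGD, momentum 0.9, weight decay 1e-3, learning rate 1e-3, LambdaLR schedule, batch size 32). The placeholder model, the maximum iteration count, and the exact LR lambda are assumptions for illustration only; the actual SAUE network, loss terms, and schedule are defined in the paper and its supplementary code.

```python
import torch
from torch import nn
from torch.optim import SGD
from torch.optim.lr_scheduler import LambdaLR

# Placeholder network; the actual SAUE architecture is provided in the paper's
# supplementary code and is not reproduced here.
model = nn.Linear(2048, 65)

max_iters = 10000  # assumed value; the paper only refers to a maximum iteration I in Algorithm 1
batch_size = 32    # reported batch size for both the source and blended-target domains

# Reported settings: SGD with momentum 0.9, weight decay 1e-3, learning rate 1e-3.
optimizer = SGD(model.parameters(), lr=1e-3, momentum=0.9, weight_decay=1e-3)

# The paper states the learning rate is updated with LambdaLR; the exact lambda is
# not given in this table, so a commonly used DA decay schedule is assumed here.
scheduler = LambdaLR(optimizer, lr_lambda=lambda it: (1 + 10 * it / max_iters) ** (-0.75))

for it in range(max_iters):
    # ... forward pass, SAUE loss computation, and backward step would go here ...
    optimizer.step()
    optimizer.zero_grad()
    scheduler.step()
```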