Agile Multi-Source-Free Domain Adaptation

Authors: Xinyao Li, Jingjing Li, Fengling Li, Lei Zhu, Ke Lu

AAAI 2024

Reproducibility Variable | Result | LLM Response

Research Type | Experimental | "By slightly tuning source bottlenecks, we achieve comparable or even superior performance on the challenging benchmark DomainNet with less than 3% trained parameters and 8 times the throughput compared with the SOTA method. ... In Table 1 we compare the performance and trainable parameters of CAiDA (Dong et al. 2021), PMTrans (Zhu, Bai, and Wang 2023) and our methods on the challenging benchmark DomainNet (Peng et al. 2019) with 6 domains. ... (4) Extensive experiments on three challenging benchmarks and detailed analysis demonstrate the success of our design."

Researcher Affiliation | Academia | (1) University of Electronic Science and Technology of China (UESTC); (2) Shenzhen Institute for Advanced Study, UESTC; (3) University of Technology Sydney; (4) School of Electronic and Information Engineering, Tongji University

Pseudocode | No | The paper describes the method with a framework diagram and mathematical equations, but it includes no formal pseudocode or algorithm block.

Open Source Code | Yes | "Code is available at https://github.com/TL-UESTC/Bi-ATEN."

Open Datasets | Yes | "We evaluate our method on three MSFDA benchmarks: Office-Home (Venkateswara et al. 2017), Office-Caltech (Gong et al. 2012) and DomainNet (Peng et al. 2019)."

Dataset Splits | No | The paper does not specify training, validation, and test splits or a cross-validation methodology; it only names the datasets used.

Hardware Specification | No | The paper provides no hardware details (GPU models, CPU types, or memory specifications) for the experiments.

Software Dependencies | No | "Implementations are based on MindSpore and PyTorch." However, specific version numbers for these dependencies are not provided.

Experiment Setup | Yes | "Hyperparameter analysis. Fig. 7 gives accuracies under different hyperparameters in Eq. (15) and Eq. (16). Results show that a large γ harms performance, which suggests that overly relying on pseudo labels misguides the weight learning process. For target domains with a larger domain gap (target inf), a larger λ is needed to constrain the intra-domain weights to avoid negative transfer, as stated in the ablation study. Optimal parameter combinations might vary across different target data, but the overall performance is relatively stable. ... Training Process: We design an alternate training procedure for Bi-ATEN."
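The "Experiment Setup" row references an alternate training procedure for Bi-ATEN and two loss-weight hyperparameters: γ scaling a pseudo-label term and λ constraining the intra-domain weights. Since the paper provides no pseudocode, the following is a minimal PyTorch sketch of what such an alternating loop could look like. Only the alternation and the roles of γ and λ are taken from the quoted text; the module names, loss forms, tensor shapes, and hyperparameter values are hypothetical placeholders, not the authors' implementation.

```python
# Hypothetical sketch of an alternate training loop in the spirit of Bi-ATEN.
# Only the alternation and the roles of gamma / lambda come from the paper's
# text; the module structure, losses, and values below are illustrative guesses.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_SOURCES, FEAT_DIM, NUM_CLASSES = 3, 256, 65  # placeholder dimensions

class AttentionWeights(nn.Module):
    """Produces a softmax weight over source models for each target sample."""
    def __init__(self, feat_dim, num_sources):
        super().__init__()
        self.proj = nn.Linear(feat_dim, num_sources)

    def forward(self, feats):
        return F.softmax(self.proj(feats), dim=-1)  # (batch, num_sources)

intra_attn = AttentionWeights(FEAT_DIM, NUM_SOURCES)  # hypothetical intra-domain branch
inter_attn = AttentionWeights(FEAT_DIM, NUM_SOURCES)  # hypothetical inter-domain branch
opt_intra = torch.optim.SGD(intra_attn.parameters(), lr=1e-3)
opt_inter = torch.optim.SGD(inter_attn.parameters(), lr=1e-3)

gamma, lam = 0.1, 1.0  # placeholder values; the paper tunes these per target

def mean_entropy(p):
    """Average prediction entropy; a common unsupervised adaptation objective."""
    return -(p * p.clamp_min(1e-8).log()).sum(dim=-1).mean()

for step in range(100):
    feats = torch.randn(32, FEAT_DIM)                       # frozen-backbone target features
    src_logits = torch.randn(32, NUM_SOURCES, NUM_CLASSES)  # per-source-model predictions

    # Phase 1: update inter-domain weights; gamma scales the pseudo-label term
    # (the paper observes that a large gamma harms performance).
    w = inter_attn(feats)                                 # (32, NUM_SOURCES)
    ensemble = (w.unsqueeze(-1) * src_logits).sum(dim=1)  # weighted ensemble logits
    pseudo = ensemble.detach().argmax(dim=-1)             # hard pseudo labels
    loss_inter = (mean_entropy(F.softmax(ensemble, dim=-1))
                  + gamma * F.cross_entropy(ensemble, pseudo))
    opt_inter.zero_grad(); loss_inter.backward(); opt_inter.step()

    # Phase 2: update intra-domain weights; lam pulls them toward uniform to
    # avoid negative transfer on targets with a large domain gap.
    w = intra_attn(feats)
    ensemble = (w.unsqueeze(-1) * src_logits).sum(dim=1)
    uniform = torch.full_like(w, 1.0 / NUM_SOURCES)
    loss_intra = (mean_entropy(F.softmax(ensemble, dim=-1))
                  + lam * F.mse_loss(w, uniform))
    opt_intra.zero_grad(); loss_intra.backward(); opt_intra.step()
```

The point of the sketch is the structure the quoted setup describes: two small weighting branches trained in alternation on top of frozen source models, with γ and λ acting as the two knobs swept in the paper's Fig. 7.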