SPA: A Graph Spectral Alignment Perspective for Domain Adaptation

Authors: Zhiqing Xiao, Haobo Wang, Ying Jin, Lei Feng, Gang Chen, Fei Huang, Junbo Zhao

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | On standardized benchmarks, the extensive experiments of SPA demonstrate that its performance has surpassed the existing cutting-edge DA methods. Coupled with dense model analysis, we conclude that our approach indeed possesses superior efficacy, robustness, discriminability, and transferability. Code and data are available at: https://github.com/CrownX/SPA. We conduct extensive evaluations on several benchmark datasets including DomainNet, Office-Home, Office31, and VisDA-2017. The experimental results show that our method consistently outperforms existing state-of-the-art domain adaptation methods.
Researcher Affiliation | Collaboration | Zhiqing Xiao (1,3), Haobo Wang (2,3), Ying Jin (4), Lei Feng (5), Gang Chen (1,3), Fei Huang (6), Junbo Zhao (1,3). 1: College of Computer Science and Technology, Zhejiang University; 2: School of Software Technology, Zhejiang University; 3: Key Lab of Intelligent Computing based Big Data of Zhejiang Province, Zhejiang University; 4: CUHK-SenseTime Joint Lab, The Chinese University of Hong Kong; 5: School of Computer Science and Engineering, Nanyang Technological University; 6: Alibaba Group
Pseudocode | No | No explicit pseudocode or algorithm block was provided.
Open Source Code | Yes | Code and data are available at: https://github.com/CrownX/SPA.
Open Datasets | Yes | We conduct experiments on 4 benchmark datasets: 1) Office31 [63] is a widely-used benchmark for visual DA. ... 2) Office-Home [76] is a challenging dataset ... 3) VisDA-2017 [58] is a large-scale benchmark ... 4) DomainNet [57] is a large-scale dataset
Dataset Splits | Yes | Following the standard protocols for unsupervised domain adaptation in previous methods [48, 56], we use the same backbone networks for fair comparisons. ... The reverse validation [47, 89] is conducted to select hyper-parameters. For both unsupervised domain adaptation (UDA) and semi-supervised domain adaptation (SSDA) scenarios, we fix the coefficient of Lnap as 0.2 and the coefficient of Lgsa as 1.0, while we offer a sensitivity analysis for these two coefficients in the following section. (A hedged loss-weighting sketch follows the table.)
Hardware Specification | Yes | All experiments are conducted on a server with the following configurations: Operating System: Ubuntu 20.04.4 LTS; CPU: Intel(R) Xeon(R) Platinum 8358P CPU @ 2.60GHz, 32 cores, 128 processors; GPU: NVIDIA GeForce RTX 3090
Software Dependencies | No | We use PyTorch and the tllib toolbox [28] to implement our method and finetune ResNet pre-trained on ImageNet [25, 26]. (A hedged backbone sketch follows the table.)
Experiment Setup | Yes | We adopt mini-batch stochastic gradient descent (SGD) with a momentum of 0.9, a weight decay of 0.005, and an initial learning rate of 0.01, following the same learning rate schedule in [48]. ... The learning rates of the layers trained from scratch are set to be 0.01. We use the same learning rate schedule as in [48, 52], including a learning rate scheduler with a momentum of 0.9, a weight decay of 0.005, the bottleneck size of 256, and a batch size of 32. (A hedged optimizer sketch follows the table.)
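
The fixed coefficients quoted under Dataset Splits (0.2 for Lnap, 1.0 for Lgsa) imply a weighted sum of a supervised task loss and the two SPA regularizers. A minimal sketch of that weighting, assuming the usual pattern of adding weighted regularizers to a task loss; the loss tensors themselves are placeholders, not the paper's actual Lnap or Lgsa implementations:

```python
import torch

def spa_objective(task_loss: torch.Tensor,
                  nap_loss: torch.Tensor,
                  gsa_loss: torch.Tensor,
                  lambda_nap: float = 0.2,   # coefficient of Lnap quoted above
                  lambda_gsa: float = 1.0    # coefficient of Lgsa quoted above
                  ) -> torch.Tensor:
    # Weighted sum: supervised task loss plus the two SPA regularizers.
    return task_loss + lambda_nap * nap_loss + lambda_gsa * gsa_loss
```

Since the paper fixes these two coefficients across both UDA and SSDA and reports a sensitivity analysis over them, the defaults above are the reported operating point rather than per-task tuned values.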
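The Software Dependencies and Experiment Setup rows together describe the backbone: an ImageNet-pretrained ResNet with a 256-dimensional bottleneck. A minimal sketch using torchvision alone (the paper uses the tllib toolbox, whose API is not quoted here); choosing ResNet-50 is an assumption, since the quote says only "ResNet":

```python
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained ResNet-50 as the feature extractor (assumed variant).
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
feature_dim = backbone.fc.in_features   # 2048 for ResNet-50
backbone.fc = nn.Identity()             # drop the ImageNet classification head

# 256-d bottleneck, matching the "bottleneck size of 256" in the setup quote.
bottleneck = nn.Sequential(
    nn.Linear(feature_dim, 256),
    nn.BatchNorm1d(256),
    nn.ReLU(),
)
model = nn.Sequential(backbone, bottleneck)
```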
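The Experiment Setup row fully specifies the optimizer (SGD, momentum 0.9, weight decay 0.005, initial learning rate 0.01) but only cites [48] for the schedule. A hedged sketch: the inverse-decay form and its constants (alpha = 10, beta = 0.75) are the values commonly paired with that citation in the domain adaptation literature, not numbers quoted here, and `max_steps` is an assumed training length:

```python
import torch

max_steps = 10_000  # assumed; the total number of iterations is not quoted

optimizer = torch.optim.SGD(
    model.parameters(),   # `model` is the backbone + bottleneck sketched above
    lr=0.01,              # initial learning rate from the quote
    momentum=0.9,
    weight_decay=0.005,
)

# Inverse decay: lr_p = lr0 * (1 + alpha * p)^(-beta), with p = step / max_steps.
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer,
    lr_lambda=lambda step: (1.0 + 10.0 * step / max_steps) ** -0.75,
)
```

Calling `scheduler.step()` once per iteration gives the per-step decay this schedule is normally used with; stepping it per epoch would decay far too slowly at a batch size of 32.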