UFDA: Universal Federated Domain Adaptation with Practical Assumptions
Authors: Xinhui Liu, Zhenghao Chen, Luping Zhou, Dong Xu, Wei Xi, Gairui Bai, Yihan Zhao, Jizhong Zhao
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on three benchmark datasets demonstrate that our method achieves comparable performance for our UFDA scenario with much fewer assumptions, compared to previous methodologies with comprehensive additional assumptions. |
| Researcher Affiliation | Academia | (1) School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an, China; (2) School of Electrical and Computer Engineering, The University of Sydney, Sydney, Australia; (3) Department of Computer Science, The University of Hong Kong, Hong Kong SAR, China |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. It provides a diagram (Figure 2) outlining the methodology but not pseudocode. |
| Open Source Code | No | The paper does not provide an explicit statement about the availability of its source code or a link to a code repository. |
| Open Datasets | Yes | Datasets. Office-Home (Venkateswara et al. 2017) is a DA benchmark that consists of four domains: Art (Ar), Clipart (Cl), Product (Pr), and Real World (Re). Office31 (Saenko et al. 2010) is another popular benchmark that consists of three domains: Amazon (A), Webcam (W), and Dslr (D). VisDA2017+ImageCLEF-DA is a combination of two datasets. VisDA2017 (Peng et al. 2018) is a DA dataset where the source domain contains simulated images (S) and the target domain contains real-world images (R). ImageCLEF-DA, on the other hand, is organized by selecting the common categories shared by three large-scale datasets: ImageCLEF (C), ImageNet (I), and Pascal VOC (P). |
| Dataset Splits | Yes | To follow the standard UniMDA training protocol, we use the same source and target samples, network architecture, learning rate, and batch size as in UMAN (Yin et al. 2022). In UFDA, each domain contains two types of labels: shared and unknown. We use a matrix to describe the specific UniMDA setting, called the UMDA-Matrix (Yin et al. 2022), defined as $\begin{pmatrix} \lvert \mathcal{C}_1 \rvert & \cdots & \lvert \mathcal{C}_M \rvert & \lvert \mathcal{C} \rvert \\ \lvert \bar{\mathcal{C}}_{S_1} \rvert & \cdots & \lvert \bar{\mathcal{C}}_{S_M} \rvert & \lvert \bar{\mathcal{C}}_t \rvert \end{pmatrix}$, where the first row gives the size of each domain's shared class set and the second row the size of its unknown class set. The first M columns correspond to the label sets of the multi-source domains, and the last column to the target domain. In this way, a UniMDA setting is determined by this division rule. To ensure a fair comparison with previous UniMDA works, we keep the same UMDA-Matrix settings as UMAN. (A hypothetical worked example of the UMDA-Matrix is shown below the table.) |
| Hardware Specification | Yes | Furthermore, we implement all methods using PyTorch and conduct all experiments on four NVIDIA GeForce GTX 2080Ti GPUs, utilizing the default parameters for each method. |
| Software Dependencies | No | The paper mentions "PyTorch" as an implementation framework but does not specify its version number or the versions of any other key software dependencies, which is required for reproducibility. |
| Experiment Setup | Yes | For model optimization, we employ stochastic gradient descent (SGD) training with a momentum of 0.9. The learning rate is decayed using a cosine schedule, starting from a high value (e.g., 0.005 for Office-31, Office-Home, and VisDA2017+ImageCLEF-DA) and decaying to zero. To follow the standard UniMDA training protocol, we use the same source and target samples, network architecture, learning rate, and batch size as in UMAN (Yin et al. 2022). In decentralized training, the number of communication rounds r plays a crucial role. To ensure a fair comparison with traditional UniMDA works, we adopt r = 1 for all tasks. (A minimal PyTorch sketch of this optimizer configuration is given below the table.) |
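
To make the UMDA-Matrix notation from the Dataset Splits row concrete, here is a hypothetical instance for M = 2 source domains; the numbers are purely illustrative and are not taken from the paper, which instead reuses UMAN's settings.

```latex
% Hypothetical UMDA-Matrix for M = 2 source domains (illustrative numbers only).
% First row: number of shared classes per source domain and for the target;
% second row: number of unknown (private) classes per source domain and for the target.
\[
\begin{pmatrix}
10 & 10 & 10 \\
 5 &  5 & 11
\end{pmatrix}
\]
% Read column-wise: source 1 has 10 shared + 5 private classes,
% source 2 has 10 shared + 5 private classes,
% and the target has 10 shared + 11 unknown classes.
```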
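The Experiment Setup row quotes SGD with momentum 0.9 and a cosine learning-rate schedule decaying from 0.005 to zero. A minimal PyTorch sketch of that optimizer configuration follows; the `model` and `total_iters` values are placeholder assumptions (the paper reuses UMAN's architecture and schedule), not the authors' implementation.

```python
import torch
from torch.optim import SGD
from torch.optim.lr_scheduler import CosineAnnealingLR

# Placeholder stand-in for the backbone; the paper adopts UMAN's network,
# which is not reproduced here.
model = torch.nn.Linear(2048, 65)

# SGD with momentum 0.9 and initial learning rate 0.005, as quoted from the paper.
optimizer = SGD(model.parameters(), lr=0.005, momentum=0.9)

# Cosine schedule decaying the learning rate from 0.005 to zero over training.
# `total_iters` is an assumed placeholder for the total number of optimization steps.
total_iters = 10_000
scheduler = CosineAnnealingLR(optimizer, T_max=total_iters, eta_min=0.0)

for step in range(total_iters):
    # ... forward pass, loss computation, and loss.backward() would go here ...
    optimizer.step()
    optimizer.zero_grad()
    scheduler.step()  # one scheduler step per optimization step
```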