Enhancing Semi-supervised Domain Adaptation via Effective Target Labeling
Authors: Jiujun He, Bin Liu, Guosheng Yin
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct thorough evaluations on three image-based benchmark datasets: Office-31 (Saenko et al. 2010), Office-Home (Venkateswara et al. 2017), and DomainNet (Peng et al. 2019). The results are reported in Tables 2 and 3. |
| Researcher Affiliation | Academia | 1Center of Statistical Research, School of Statistics, Southwestern University of Finance and Economics, Chengdu, China 2Department of Statistics and Actuarial Science, The University of Hong Kong, Hong Kong, China |
| Pseudocode | Yes | Algorithm 1: Overview of our proposed learning framework and Algorithm 2: Non-maximal degree node suppression |
| Open Source Code | Yes | For more details, please refer to our code: https://github.com/BetterTMrR/EFTL-Pytorch-main. |
| Open Datasets | Yes | We conduct thorough evaluations on three image-based benchmark datasets: Office-31 (Saenko et al. 2010), Office-Home (Venkateswara et al. 2017), and DomainNet (Peng et al. 2019). |
| Dataset Splits | No | No explicit train/validation/test splits (percentages or counts), nor references to standard predefined splits, are given for the datasets used (Office-31, Office-Home, DomainNet). |
| Hardware Specification | No | No specific hardware details (GPU/CPU models, processor types, memory amounts) are mentioned for the experimental setup. |
| Software Dependencies | No | No specific software dependencies with version numbers are provided. The paper mentions PyTorch in the GitHub link, but without a version. |
| Experiment Setup | Yes | Since the numbers of the AEN and ADN in the directed graph construction depend on the scale of the dataset, we choose M1 = nt/(3K) and M2 = M1/5 to adaptively construct the directed graph across all datasets. The trade-off hyperparameter α in Eq. (4) is fixed at 0.1. For the baseline FixMME, the threshold τ in Eq. (7) is set to 0.85 for all datasets except DomainNet, where it is 0.8. Following Li et al. (2021b), we apply label smoothing with parameter 0.1 to avoid overconfident predictions when using the cross-entropy loss. We run each experiment three times independently with different random seeds. |
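
The reported hyperparameter choices can be collected in a small sketch. This is an illustration of the quoted setup only, not the authors' code; `graph_hyperparams` and `smooth_labels` are hypothetical helper names, and the label-smoothing formula below is one common convention (the paper does not spell out which variant it uses):

```python
def graph_hyperparams(n_t, K):
    """Adaptive node counts for the directed graph, per the reported setup:
    M1 = n_t / (3K) and M2 = M1 / 5 (rounded to at least 1)."""
    M1 = max(1, round(n_t / (3 * K)))
    M2 = max(1, round(M1 / 5))
    return M1, M2

# Fixed values quoted in the Experiment Setup row.
ALPHA = 0.1            # trade-off hyperparameter in Eq. (4)
TAU = {"Office-31": 0.85, "Office-Home": 0.85, "DomainNet": 0.8}  # Eq. (7)
LABEL_SMOOTHING = 0.1  # smoothing parameter for the cross-entropy loss

def smooth_labels(one_hot, eps=LABEL_SMOOTHING):
    """One common label-smoothing convention: keep (1 - eps) on the true
    class and spread eps uniformly over the remaining K - 1 classes."""
    K = len(one_hot)
    return [(1 - eps) if y == 1 else eps / (K - 1) for y in one_hot]
```

For example, a target domain with 3,000 unlabeled samples and 10 classes gives M1 = 100 and M2 = 20 under this rule.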