Alleviating Imbalanced Pseudo-label Distribution: Self-Supervised Multi-Source Domain Adaptation with Label-specific Confidence

Authors: Shuai Lü, Meng Kang, Ximing Li

IJCAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We evaluate S3DA-LC on several benchmark datasets, indicating its superior performance compared with the existing MSDA baselines."
Researcher Affiliation | Academia | 1 Key Laboratory of Symbolic Computation and Knowledge Engineering (Jilin University), Ministry of Education, China; 2 College of Computer Science and Technology, Jilin University, China; 3 College of Software, Jilin University, China. lus@jlu.edu.cn, kangmeng20@mails.jlu.edu.cn, liximing86@gmail.com
Pseudocode | Yes | "Algorithm 1: The full training process of S3DA-LC"
Open Source Code | Yes | "The implementation is available at https://github.com/MengKang98/S3DA-LC."
Open Datasets | Yes | "Datasets. In the experiments, we evaluate S3DA-LC on 3 benchmark datasets: (1) Office-31 [Saenko et al., 2010] contains 31 classes and 4,652 images unevenly spread across three visual domains: Amazon (A), DSLR (D), Webcam (W); (2) Office-Home [Venkateswara et al., 2017] contains 65 classes and about 15,500 images from 4 domains: Art (Ar), Clipart (Cl), Product (Pr) and Real-World (Rw); (3) DomainNet [Peng et al., 2019] contains 345 classes and over 600K images from 6 domains: Clipart (Clp), Infograph (Inf), Painting (Pnt), Quickdraw (Qdr), Real (Rel) and Sketch (Skt)."
Dataset Splits | No | The paper uses labeled source domains for training and unlabeled target domains for adaptation and evaluation, which is the standard MSDA protocol. However, it does not specify train/validation/test splits (e.g., percentages or counts) within each domain, nor does it reference pre-defined splits, so the data partitioning beyond the inherent source/target division is not reproducible from the paper alone.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU specifications, or cloud computing instance types used for running the experiments.
Software Dependencies | No | The paper mentions using the Adam optimizer and ResNet backbones, but does not specify software versions for programming languages, libraries (e.g., PyTorch, TensorFlow), or other dependencies.
Experiment Setup | Yes | "Implementation details. We adopt ResNet-50 [He et al., 2016] as the backbone network for Office-31 and Office-Home, and adopt ResNet-101 [He et al., 2016] as the backbone network for DomainNet. We employ a single fully connected layer as the classifier, i.e., domain-specific classifier, for each source domain. We use the Adam optimizer with the learning rate 10^-5 and weight decay 5×10^-4. For S3DA-LC, we set τ to 0.9 for all datasets and the sensitivity analysis of parameters will be discussed later. We follow the common setting for Tukey's fences [Tukey, 1977] and set λ to 1.5 for all datasets."