Universal Domain Adaptation through Self-Supervision

Authors: Kuniaki Saito, Donghyun Kim, Stan Sclaroff, Kate Saenko

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We show through extensive experiments that DANCE outperforms baselines across open-set, open-partial, and partial domain adaptation settings. Implementation is available at https://github.com/VisionLearningGroup/DANCE.
Researcher Affiliation | Collaboration | Kuniaki Saito (1), Donghyun Kim (1), Stan Sclaroff (1), Kate Saenko (1,2); (1) Boston University, (2) MIT-IBM Watson AI Lab; [keisaito,dohnk,sclaroff,saenko]@bu.edu
Pseudocode | No | No structured pseudocode or algorithm blocks found.
Open Source Code | Yes | Implementation is available at https://github.com/VisionLearningGroup/DANCE.
Open Datasets | Yes | As the most prevalent benchmark dataset, we use Office [32], which has three domains (Amazon (A), DSLR (D), Webcam (W)) and 31 classes. The second benchmark dataset, Office-Home (OH) [40], contains four domains and 65 classes. The third dataset, VisDA (VD) [30], contains 12 classes from two domains: synthetic and real images. We provide an analysis of varying the number of classes using Caltech [14] and ImageNet [8] because these datasets contain a large number of classes.
Dataset Splits | Yes | We follow the settings of CDAN [25] for closed (CDA), SAN [2] for partial (PDA), STA [23] for open-set (ODA), and UAN [43] for open-partial domain adaptation (OPDA) in our experiments.
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) are mentioned for running the experiments. The paper states 'All experiments are implemented in PyTorch [29]. We employ ResNet50 [17] pre-trained on ImageNet [8] as the feature extractor in all experiments.' but gives no hardware specifications.
Software Dependencies | No | All experiments are implemented in PyTorch [29]. We employ ResNet50 [17] pre-trained on ImageNet [8] as the feature extractor in all experiments. No specific version numbers for PyTorch or other libraries are provided; a minimal loading sketch is given after this table.
Experiment Setup | Yes | We set λ in Eq. 9 to 0.05 and m in Eq. 7 to 0.5 for our method. For all comparisons, we use the same hyper-parameters, batch size, learning rate, and checkpoint. The analysis of sensitivity to hyper-parameters is discussed in the supplementary.
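The Software Dependencies row only establishes PyTorch and an ImageNet-pretrained ResNet50 feature extractor, with no version numbers. Below is a minimal sketch of such a setup, assuming torchvision's pretrained weights and a simple truncation of the classification head; it is illustrative only, not the authors' released code from the DANCE repository.

```python
import torch
import torch.nn as nn
from torchvision import models

# Sketch of the reported setup: an ImageNet-pretrained ResNet50 used as the
# feature extractor. The layer truncation below is an assumption made for
# illustration, not the configuration from the authors' repository.
backbone = models.resnet50(pretrained=True)
feature_extractor = nn.Sequential(*list(backbone.children())[:-1])  # drop the 1000-way FC head

x = torch.randn(4, 3, 224, 224)              # dummy batch of images
features = feature_extractor(x).flatten(1)   # -> (4, 2048) feature vectors
```

In torchvision 0.13 and later, the same pretrained weights are requested with models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1).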
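The Experiment Setup row fixes only two values: λ = 0.05 (Eq. 9) and m = 0.5 (Eq. 7). A hedged configuration sketch collecting them is given below; the field names are illustrative assumptions, and batch size, learning rate, and checkpointing are said to be shared across comparisons but are not quoted with concrete values.

```python
from dataclasses import dataclass

# Hedged run-configuration sketch. Only `lam` (λ in Eq. 9) and `margin`
# (m in Eq. 7) come from the paper; the field names are assumptions.
@dataclass
class DanceConfig:
    lam: float = 0.05           # trade-off weight λ (Eq. 9)
    margin: float = 0.5         # margin m (Eq. 7)
    backbone: str = "resnet50"  # ImageNet-pretrained feature extractor, as stated
    # Batch size, learning rate, and checkpoint are shared across all
    # comparisons per the paper, but their values are not given in this excerpt.

cfg = DanceConfig()
print(cfg)
```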