Self-adaptive Re-weighted Adversarial Domain Adaptation

Authors: Shanshan Wang, Lei Zhang

IJCAI 2020

Reproducibility Variable | Result | LLM Response
Research Type: Experimental. Empirical evidence demonstrates that the proposed model outperforms state-of-the-art methods on standard domain adaptation datasets: "In this section, several benchmark datasets, not only toy datasets such as USPS and MNIST, but also the Office-31 dataset [Saenko et al., 2010], the ImageCLEF-DA dataset [Long et al., 2017] and the Office-Home dataset [Venkateswara et al., 2017], are adopted for evaluation."
Researcher Affiliation: Academia. Learning Intelligence & Vision Essential (LiVE) Group, School of Microelectronics and Communication Engineering, Chongqing University, Chongqing, China.
Pseudocode: No. No formal pseudocode or algorithm blocks are provided; the methodology is described through mathematical equations and textual explanations.
Open Source Code: No. The paper makes no statement about open-sourcing code and provides no link to a code repository.
Open Datasets: Yes. "In this section, several benchmark datasets, not only toy datasets such as USPS and MNIST, but also the Office-31 dataset [Saenko et al., 2010], the ImageCLEF-DA dataset [Long et al., 2017] and the Office-Home dataset [Venkateswara et al., 2017], are adopted for evaluation." Handwritten Digits Datasets: USPS (U) and MNIST (M) are toy datasets for domain adaptation, standard digit recognition datasets containing handwritten digits 0–9. Office-31 Dataset: the most popular benchmark for cross-domain object recognition. ImageCLEF-DA Dataset: the benchmark of the ImageCLEF 2014 domain adaptation challenge. Office-Home Dataset: a newer, more challenging dataset for domain adaptation, consisting of 15,500 images from 65 categories across four significantly different domains: Artistic images (Ar), Clip Art (Cl), Product images (Pr) and Real-World images (Rw).
Dataset Splits: No. The paper uses standard datasets (USPS, MNIST, Office-31, ImageCLEF-DA, Office-Home) and refers to the "standard evaluation protocol of UDA [Long et al., 2017]" for unseen target labels, but it does not give explicit training/validation/test split percentages or sample counts, nor does it describe how validation sets were constructed or used beyond that general protocol.
Hardware Specification: No. The paper does not report hardware details such as GPU models, CPU types, or memory used for the experiments.
Software Dependencies: No. "Our implementation is based on the PyTorch framework." No version numbers for PyTorch or other software dependencies are given.
Experiment Setup: Yes. "Our implementation is based on the PyTorch framework. For the toy datasets of handwritten digits, we utilize LeNet. For the other datasets, we use the pre-trained ResNet-50 as the backbone network. We adopt the progressive training strategies as in CDAN [Long et al., 2018]. In the process of selecting pseudo-labeled samples, the threshold T is empirically set to the constant 0.9. The margin m and N0 in the triplet loss are set to 0.3 and 3, respectively, following the usual settings."
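To make the reported hyperparameters concrete, the sketch below illustrates the two numeric settings quoted above: confidence-thresholded pseudo-label selection with T = 0.9, and a standard triplet hinge with margin m = 0.3. This is a minimal illustration, not the authors' released code (none exists); the function names, the plain-Python softmax, and the use of precomputed distances in the triplet term are all our assumptions.

```python
import math


def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]


def select_pseudo_labels(logit_batch, threshold=0.9):
    """Keep target samples whose top class probability exceeds `threshold`.

    Returns (sample_index, pseudo_label) pairs. Illustrative only; the
    paper states T = 0.9 but does not publish the selection code.
    """
    selected = []
    for i, logits in enumerate(logit_batch):
        probs = softmax(logits)
        conf = max(probs)
        if conf > threshold:
            selected.append((i, probs.index(conf)))
    return selected


def triplet_loss(d_ap, d_an, margin=0.3):
    """Standard triplet hinge on precomputed anchor-positive / anchor-negative
    distances: penalize unless d_ap + margin < d_an (paper sets m = 0.3)."""
    return max(0.0, d_ap - d_an + margin)


# Only the confident second sample clears the 0.9 threshold.
batch = [[0.2, 0.3, 0.1],   # low confidence, dropped
         [8.0, 0.1, 0.2]]   # high confidence, kept with label 0
print(select_pseudo_labels(batch))  # → [(1, 0)]
```

A well-separated triplet (e.g. `triplet_loss(0.5, 1.0)`) incurs zero loss, while a violating one (`triplet_loss(1.0, 0.5)`) is penalized by the margin-shifted gap.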