Unknown-Aware Domain Adversarial Learning for Open-Set Domain Adaptation

Authors: JoonHo Jang, Byeonghu Na, Dong Hyeok Shin, Mingi Ji, Kyungwoo Song, Il-chul Moon

NeurIPS 2022

Reproducibility assessment: each variable below is listed with its result, followed by the supporting LLM response.
Research Type: Experimental
Empirically, we evaluate UADAL on the benchmark datasets, which shows that UADAL outperforms other methods with better feature alignments, reporting state-of-the-art performance.
Researcher Affiliation: Collaboration
JoonHo Jang (KAIST, adkto8093@kaist.ac.kr); Byeonghu Na (KAIST, wp03052@kaist.ac.kr); Dong Hyeok Shin (KAIST, tlsehdgur0@kaist.ac.kr); Mingi Ji (KAIST, qwertgfdcvb@kaist.ac.kr; now at Google, mingiji@google.com); Kyungwoo Song (University of Seoul, kyungwoo.song@uos.ac.kr); Il-Chul Moon (KAIST and Summary.AI, icmoon@kaist.ac.kr)
Pseudocode: Yes
Algorithm 1 in Appendix B.3.1 enumerates the detailed training procedure of UADAL.
Open Source Code: Yes
The code will be publicly available at https://github.com/JoonHo-Jang/UADAL.
Open Datasets: Yes
We utilized several benchmark datasets. Office-31 [22] consists of three domains, Amazon (A), Webcam (W), and DSLR (D), with 31 classes. Office-Home [30] is a more challenging dataset with four domains, Artistic (A), Clipart (C), Product (P), and Real-World (R), containing 65 classes. VisDA [21] is a large-scale dataset of synthetic-to-real images with 12 classes. (In the self-assessment, point 4d states: 'We mentioned that we use the public benchmark dataset, in Section 4.1.')
Dataset Splits: Yes
In terms of the class settings, we follow the experimental protocol of [23]. (In the self-assessment, point 3b states: 'Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes]') This implies that the data splits, including those used for validation, are specified in the paper or its supplementary materials.
Hardware Specification: Yes
All the experiments were conducted on a single GPU (NVIDIA A100). (In the self-assessment, point 3d states: 'Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] The details in Appendix C.1.1.')
Software Dependencies: No
Our implementation is based on the PyTorch framework [16]. The paper mentions PyTorch but gives no version number, and it lists no other software dependencies with versions.
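
Because the paper names PyTorch without pinning a version, anyone re-running the released code must record the environment themselves. Below is a minimal sketch of such a check, assuming only a standard PyTorch install; the printed values describe whatever local environment it runs in, not the authors' setup.

```python
# Record the unpinned software versions before re-running the released code.
# Note: these report the *local* environment, not the one used in the paper.
import sys

import torch

print("python:", sys.version.split()[0])
print("torch :", torch.__version__)
print("cuda  :", torch.version.cuda)               # None on CPU-only builds
print("cudnn :", torch.backends.cudnn.version())   # None if cuDNN is unavailable
```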
Experiment Setup: Yes
For training, we use the SGD optimizer with momentum 0.9 and a learning rate initialized to 0.01 and decayed by 0.1 at 80% and 90% of the total training iterations. The batch size is set to 32. The total number of iterations is 10,000 for Office-31 and 20,000 for Office-Home and VisDA. The initial fitting of Eq. (20) trains G and E for 500 iterations.
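
The quoted hyperparameters map onto standard PyTorch components. The following is a minimal sketch, assuming a per-iteration training loop; the placeholder linear model and random batches are purely illustrative stand-ins for the UADAL networks (G, E, C) and the real datasets, while the optimizer, momentum, learning rate, decay schedule, batch size, and iteration counts come directly from the text.

```python
import torch

# Placeholder network and loss; the paper trains a feature extractor G,
# open-set recognizer E, and classifier C, which are not reproduced here.
model = torch.nn.Linear(2048, 65)
loss_fn = torch.nn.CrossEntropyLoss()

total_iters = 20_000  # 10,000 for Office-31; 20,000 for Office-Home and VisDA
batch_size = 32

optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
# Decay the learning rate by 0.1 at 80% and 90% of the total iterations.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer,
    milestones=[int(0.8 * total_iters), int(0.9 * total_iters)],
    gamma=0.1,
)

for step in range(total_iters):
    x = torch.randn(batch_size, 2048)        # stand-in for a feature batch
    y = torch.randint(0, 65, (batch_size,))  # stand-in labels
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    optimizer.step()
    scheduler.step()  # stepped once per iteration, so milestones are in iterations
```

Stepping the scheduler per iteration (rather than per epoch) matches the text's phrasing of decay points as fractions of the total training iterations.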