Progressive Graph Learning for Open-Set Domain Adaptation
Authors: Yadan Luo, Zijian Wang, Zi Huang, Mahsa Baktashmotlagh
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on three standard open-set benchmarks evidence that our approach significantly outperforms the state-of-the-arts in open-set domain adaptation. |
| Researcher Affiliation | Academia | Yadan Luo*, Zijian Wang*, Zi Huang, Mahsa Baktashmotlagh. School of Information Technology and Electrical Engineering, The University of Queensland, Australia. Correspondence to: Yadan Luo <lyadanluol@gmail.com>. |
| Pseudocode | No | The paper describes the methodology in prose and mathematical equations but does not provide a clearly labeled pseudocode or algorithm block. |
| Open Source Code | No | PyTorch implementation of our approach is available in an anonymized repository: https://github.com/BUserName/PGL. The provided URL is a placeholder and does not offer concrete access to the source code. |
| Open Datasets | Yes | We applied our method on three challenging open-set object recognition benchmarks, i.e., the Office-Home, VisDA-17, and Syn2Real-O. Office-Home (Venkateswara et al., 2017) is a challenging domain adaptation benchmark... VisDA-17 (Peng et al., 2017) is a cross-domain dataset... Syn2Real-O (Peng et al., 2018) is the most challenging synthetic-to-real testbed... |
| Dataset Splits | Yes | Following the same splits used in (Liu et al., 2019), we select the first 25 classes in alphabetical order as the known classes, and group the rest of the classes as the unknown. Following the same protocol used in (Saito et al., 2018; Liu et al., 2019), we construct the known set with 6 categories and group the remaining 6 categories as the unknown set. (A minimal split sketch follows the table.) |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models or types of computing resources used for experiments. |
| Software Dependencies | No | PyTorch implementation of our approach is available in an anonymized repository. While PyTorch is mentioned, no version number or other specific software dependencies with their versions are listed. |
| Experiment Setup | Yes | The networks are trained with the Adam optimizer with a weight decay of 5×10⁻⁵. The learning rate is initialized as 1×10⁻⁴ and 1×10⁻⁵ for the GNNs and the backbone module respectively, and then decayed by a factor of 0.5 every 4 epochs. The dropout rate is fixed to 0.2 and the depth L of the GNN is set to 1 for all experiments. The loss coefficients γ and µ are empirically set to 0.4 and 0.3, respectively. The detailed experimental settings for the three datasets are summarized in Table 1, where B is the batch size, α the enlarging factor, and β the hyperparameter balancing the openness. (A configuration sketch based on these values follows the table.) |
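
For the Dataset Splits row, the quoted protocol (first 25 classes in alphabetical order kept as known, the remainder collapsed into a single unknown class) can be illustrated with a few lines. This is a minimal sketch assuming a one-folder-per-class dataset layout; `build_openset_split` and `relabel` are hypothetical helpers, not taken from the authors' repository.

```python
import os

def build_openset_split(dataset_root, num_known=25):
    """Return (known_classes, unknown_classes) following the alphabetical protocol."""
    all_classes = sorted(os.listdir(dataset_root))   # assumes one sub-folder per class
    known = all_classes[:num_known]                  # first 25 classes -> known set
    unknown = all_classes[num_known:]                # remaining classes -> unknown set
    return known, unknown

def relabel(class_name, known_classes):
    """Map a class to an integer label; all unknown classes share one extra label."""
    if class_name in known_classes:
        return known_classes.index(class_name)
    return len(known_classes)                        # single "unknown" label
```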
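
For the Experiment Setup row, the reported optimization settings (Adam, weight decay 5×10⁻⁵, learning rates 1×10⁻⁴ for the GNN and 1×10⁻⁵ for the backbone, halved every 4 epochs, dropout 0.2) map naturally onto a short PyTorch configuration. The sketch below is an assumption-laden illustration, not the authors' code: `backbone` and `gnn` are stand-in modules, and only the optimizer, scheduler, and coefficient values come from the paper.

```python
import torch
import torch.nn as nn

# Placeholder modules standing in for the CNN backbone and the depth-1 GNN head.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(2048, 256))
gnn = nn.Sequential(nn.Linear(256, 256), nn.Dropout(p=0.2))   # dropout rate fixed to 0.2

# Separate learning rates per module, shared weight decay of 5e-5.
optimizer = torch.optim.Adam(
    [
        {"params": backbone.parameters(), "lr": 1e-5},  # backbone learning rate
        {"params": gnn.parameters(), "lr": 1e-4},       # GNN learning rate
    ],
    weight_decay=5e-5,
)

# Decay both learning rates by a factor of 0.5 every 4 epochs.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=4, gamma=0.5)

# Loss coefficients γ and µ reported in the paper; how they weight the loss terms
# is described in the paper and not reproduced here.
gamma_coeff, mu_coeff = 0.4, 0.3
```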