Prototypical Partial Optimal Transport for Universal Domain Adaptation

Authors: Yucheng Yang, Xiang Gu, Jian Sun

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on four benchmarks show that our method outperforms the previous state-of-the-art UniDA methods. In experiments, we evaluate our method on four UniDA benchmarks. Experimental results show that our method performs favorably compared with the state-of-the-art methods for UniDA.
Researcher Affiliation | Academia | Yucheng Yang*, Xiang Gu*, Jian Sun. School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an, China. {ycyang, xianggu}@stu.xjtu.edu.cn, jiansun@xjtu.edu.cn
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
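
Since the paper provides no pseudocode, the following is a minimal, hedged sketch of the core idea named in the title: matching target features to source class prototypes with partial optimal transport, here via the POT library. The mass fraction m, the feature shapes, and all variable names are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only (not the authors' code): a partial-OT assignment
# between target features and source class prototypes, in the spirit of PPOT.
# Requires the POT library: pip install pot
import numpy as np
import ot

rng = np.random.default_rng(0)
n_target, n_classes, dim = 72, 31, 256   # assumed shapes (batch of 72, Office-31 classes)
target_feats = rng.normal(size=(n_target, dim))
prototypes = rng.normal(size=(n_classes, dim))

# Cost matrix: squared Euclidean distances, normalized for numerical stability.
M = ot.dist(target_feats, prototypes, metric="sqeuclidean")
M /= M.max()

# Uniform marginals; only a fraction m of the total mass is transported,
# so target-private ("unknown") samples can remain unmatched.
a = np.full(n_target, 1.0 / n_target)
b = np.full(n_classes, 1.0 / n_classes)
m = 0.5  # assumed fraction of common-class mass, a tunable quantity

# Entropic partial Wasserstein returns an (n_target x n_classes) transport plan.
plan = ot.partial.entropic_partial_wasserstein(a, b, M, reg=0.05, m=m)

# Rows with near-zero transported mass are candidates for the unknown class;
# the argmax over prototypes gives a pseudo-label for matched samples.
matched_mass = plan.sum(axis=1)
pseudo_labels = plan.argmax(axis=1)
```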
Open Source Code | Yes | Code is available at https://github.com/ycyangxjtu/PPOT.
Open Datasets | Yes | Datasets. Office-31 (Saenko et al. 2010) includes 4,652 images in 31 categories from 3 domains: Amazon (A), DSLR (D), and Webcam (W). Office-Home (Venkateswara et al. 2017) consists of 15,500 images in 65 categories... VisDA (Peng et al. 2017) is a larger dataset... DomainNet (Peng et al. 2019) is one of the most challenging datasets...
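
As a side note, Office-31-style datasets are usually distributed as one directory per class, so a torchvision ImageFolder loader is the common route. The directory paths below are assumptions, not paths given by the paper.

```python
# Hedged sketch: loading Office-31 domains with torchvision.
# The on-disk layout and paths are assumptions.
from torchvision import datasets, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
amazon = datasets.ImageFolder("office31/amazon/images", transform=tfm)  # hypothetical path
webcam = datasets.ImageFolder("office31/webcam/images", transform=tfm)  # hypothetical path
```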
Dataset Splits | No | The paper describes the datasets and how evaluation is performed (e.g., accuracy over all target samples, H-score), but it does not provide specific train/validation/test split percentages or sample counts for any of the datasets used.
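
Although this paper does not state its splits, UniDA evaluation conventionally partitions the label set rather than the samples; for Office-31, prior work (You et al. 2019) uses 10 common, 10 source-private, and 11 target-private classes. Whether PPOT follows exactly this protocol is an assumption here; the sketch below only illustrates the convention.

```python
# Conventional UniDA label-set split for Office-31 (You et al. 2019);
# assumed for illustration, not confirmed by this paper.
NUM_CLASSES = 31
common = list(range(0, 10))            # classes shared by source and target
source_private = list(range(10, 20))   # classes seen only in the source domain
target_private = list(range(20, 31))   # target-only classes, treated as "unknown"

source_label_set = common + source_private
target_label_set = common + target_private
```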
Hardware Specification | Yes | We implement our method using PyTorch (Paszke et al. 2019) on a single NVIDIA RTX A6000 GPU.
Software Dependencies | No | The paper mentions 'PyTorch' and 'MoCo v2' but does not provide specific version numbers for these software dependencies, which reproducibility requires.
Experiment Setup | Yes | In the training phase, we optimize the model using Nesterov momentum SGD with momentum of 0.9 and weight decay of 5×10⁻⁴. Following (Ganin and Lempitsky 2015), the learning rate decays with the factor (1 + αt)^(−β), where t changes linearly from 0 to 1 during training, and we set α = 10, β = 0.75. The batch size is set to 72 in all experiments except in DomainNet tasks, where it is changed to 256. We train our model for 5 epochs (1,000 iterations per epoch)... The initial learning rate is set to 1×10⁻⁴ on Office-31, 5×10⁻⁴ on Office-Home and VisDA, and 0.01 on DomainNet.
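
The quoted setup translates almost directly into PyTorch. Below is a minimal sketch assuming a generic model; the schedule implements lr(t) = lr0 · (1 + αt)^(−β) with t running linearly from 0 to 1 over training, per (Ganin and Lempitsky 2015). The placeholder model and the omitted loss computation are assumptions.

```python
# Hedged sketch of the reported optimization setup (not the authors' code).
import torch

model = torch.nn.Linear(256, 31)  # placeholder for the actual network

optimizer = torch.optim.SGD(
    model.parameters(),
    lr=1e-4,             # initial lr on Office-31 (5e-4 on Office-Home/VisDA, 0.01 on DomainNet)
    momentum=0.9,
    weight_decay=5e-4,
    nesterov=True,
)

# lr(t) = lr0 * (1 + alpha * t) ** (-beta), with t in [0, 1] over all iterations.
alpha, beta = 10.0, 0.75
total_iters = 5 * 1000  # 5 epochs, 1,000 iterations per epoch

scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer,
    lr_lambda=lambda step: (1.0 + alpha * step / total_iters) ** (-beta),
)

for step in range(total_iters):
    optimizer.zero_grad()
    # loss = compute_loss(batch)  # model- and data-specific, omitted here
    # loss.backward()
    optimizer.step()
    scheduler.step()
```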