Joint Partial Optimal Transport for Open Set Domain Adaptation
Authors: Renjun Xu, Pelen Liu, Yin Zhang, Fang Cai, Jindong Wang, Shuoying Liang, Heting Ying, Jianwei Yin
IJCAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 4 Experiments Now we evaluate our method with state-of-the-art domain adaptation approaches on several benchmark datasets. 4.1 Setup Digits contains three standard digit classification datasets: MNIST [Lecun et al., 1998], USPS [Hull, 2002], and SVHN [Netzer et al., 2011]. ... 4.3 Result and Discussion The classification accuracy results on the Office-31 dataset for open set domain adaptation based on ResNet-50 are shown in Table 1. ... 4.4 Ablation Study To tease apart the separate contributions of the center-based optimal transport strategy and discriminative domain alignment, we compare JPOT with OSBP [Saito et al., 2018], DeepJDOT [Damodaran et al., 2018] and a variant of JPOT using the t-SNE embeddings [Donahue et al., 2014] of the last-layer features on transfer task MNIST→USPS in Figures 3(a)-3(h). |
| Researcher Affiliation | Collaboration | Renjun Xu¹, Pelen Liu¹, Yin Zhang¹, Fang Cai², Jindong Wang³, Shuoying Liang¹, Heting Ying⁴, Jianwei Yin¹ — ¹Zhejiang University, ²Stanford University, ³Microsoft Research, ⁴China Jiliang University |
| Pseudocode | No | The paper describes the methodology and algorithms using mathematical formulations and descriptive text, but it does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any statement regarding the release of its source code, nor does it provide a link to a code repository. |
| Open Datasets | Yes | Digits contains three standard digit classification datasets: MNIST [Lecun et al., 1998], USPS [Hull, 2002], and SVHN [Netzer et al., 2011]. ... Office-31 [Saenko et al., 2010] is a standard dataset in computer vision for domain adaptation... Office-Home [Venkateswara et al., 2017] is a more challenging domain adaptation dataset... |
| Dataset Splits | No | The paper states "report adaptation results on the test sets" and "We use all labeled source examples and unlabeled examples for training." However, it does not specify explicit percentages or sample counts for training, validation, and test splits, nor does it refer to specific, predefined splits with sufficient detail for full reproducibility. |
| Hardware Specification | No | The paper mentions using ResNet-50 and LeNet architectures but does not provide any specific details about the hardware (e.g., GPU models, CPU types, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions using the "Adam optimizer" and fine-tuning with "ResNet-50" and "LeNet" but does not specify any software dependencies with version numbers (e.g., Python, TensorFlow, PyTorch, or specific library versions). |
| Experiment Setup | Yes | All the mentioned deep learning methods are trained with the Adam optimizer. The model is trained with 256-sized batches, with N = 128 samples from each domain. We use mini-batch SGD with momentum set to 0.9 and the same learning rate strategy as in [Saito et al., 2018]. Following previous work [Herath et al., 2019], m is set to 0.4 in all experiments. Note that the parameter η is set to 0.1. As for the trade-off hyper-parameters α and β, we select α = 0.02 and β = 0.05 for all transfer tasks. |
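
The quoted setup can be collected into a single reproducibility config. This is a minimal sketch assuming the values quoted above; the dict keys and the sanity check are my own naming, not taken from the authors' (unreleased) code.

```python
# Hedged sketch: the hyperparameters quoted from the paper's setup section,
# gathered into one config dict. Key names are illustrative, not official.
jpot_config = {
    "optimizer": "Adam",
    "batch_size": 256,          # total mini-batch size
    "samples_per_domain": 128,  # N = 128 from source and target each
    "momentum": 0.9,            # for the mini-batch SGD component
    "m": 0.4,                   # margin, following Herath et al., 2019
    "eta": 0.1,                 # parameter η
    "alpha": 0.02,              # trade-off hyper-parameter α
    "beta": 0.05,               # trade-off hyper-parameter β
}

# Sanity check: the two per-domain halves should fill the batch exactly.
assert 2 * jpot_config["samples_per_domain"] == jpot_config["batch_size"]
```

A config like this makes the "Experiment Setup: Yes" verdict concrete: every value needed to rerun training is stated in the paper, even though no code or hardware details were released.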