Confidence Score for Source-Free Unsupervised Domain Adaptation
Authors: Jonghyun Lee, Dahuin Jung, Junho Yim, Sungroh Yoon
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluated our proposed methods, the JMDS score and CoWA-JMDS, on three public UDA benchmarks: Office-31 (Saenko et al., 2010), Office-Home (Venkateswara et al., 2017), and VisDA-2017 (Peng et al., 2017). More details are provided in Appendix E. |
| Researcher Affiliation | Collaboration | (1) Data Science and AI Lab., Seoul National University; (2) AIRS Company, Hyundai Motor Group, Seoul, Korea; (3) Department of ECE and Interdisciplinary Program in AI, Seoul National University. |
| Pseudocode | Yes | Algorithm 1: Known/unknown classification; Algorithm 2: Class estimation; Algorithm 3: CoWA-JMDS. |
| Open Source Code | Yes | The code is available at https://github.com/Jhyun17/CoWA-JMDS. |
| Open Datasets | Yes | We evaluated our proposed methods, the JMDS score and CoWA-JMDS, on three public UDA benchmarks: Office-31 (Saenko et al., 2010), Office-Home (Venkateswara et al., 2017), and VisDA-2017 (Peng et al., 2017). |
| Dataset Splits | No | The paper mentions training models and evaluating performance but does not specify explicit training/validation/test dataset splits (e.g., percentages or exact counts) for reproducibility. |
| Hardware Specification | No | The paper mentions 'GPUs' in the introduction as a resource for training models but does not provide specific hardware details such as GPU models (e.g., NVIDIA A100, Tesla V100), CPU models, or memory specifications used for running experiments. |
| Software Dependencies | No | The paper mentions using 'ResNet-50 or ResNet-101' as backbone networks, 'label smoothing', 'weight normalization', 'batch normalization', and 'mini-batch SGD', but it does not specify version numbers for any software libraries (e.g., Python, PyTorch, TensorFlow) or specialized packages. |
| Experiment Setup | Yes | We use mini-batch SGD with momentum 0.9 and weight decay 1e-3 for all experiments. The learning rate of the bottleneck layer is 1e-2 and that of the remaining layers is 1e-3 for all three datasets (Office-31, Office-Home, and VisDA-2017). We do not use learning rate decay, and the numbers of epochs are 50, 30, and 15, respectively. We set the batch size to 64 for all three benchmark datasets. The weight Mixup hyperparameter α is set to 0.2, 0.2, and 2.0, respectively. The threshold τ for class estimation in the partial-set scenario is set to 0.3 for the Office-Home dataset. (A hedged sketch of this setup follows the table.) |
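
The optimizer settings quoted in the last row translate into a per-parameter-group configuration. The following is a minimal PyTorch sketch under stated assumptions: the module names (`backbone`, `bottleneck`, `classifier`), the bottleneck width, the class count, and the use of torchvision's ResNet-50 are illustrative choices for Office-31, not the authors' released CoWA-JMDS code.

```python
# Sketch of the reported training setup: mini-batch SGD, momentum 0.9,
# weight decay 1e-3, lr 1e-2 for the bottleneck and 1e-3 elsewhere, no lr decay.
# Module names and dimensions below are assumptions for illustration only.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet50(weights=None)   # ResNet-50 feature extractor (assumed)
backbone.fc = nn.Identity()                # expose 2048-d features
feat_dim, bottleneck_dim, num_classes = 2048, 256, 31  # 31 classes for Office-31

bottleneck = nn.Sequential(
    nn.Linear(feat_dim, bottleneck_dim),
    nn.BatchNorm1d(bottleneck_dim),        # batch normalization, as mentioned in the paper
)
classifier = nn.utils.weight_norm(         # weight-normalized head, as mentioned in the paper
    nn.Linear(bottleneck_dim, num_classes)
)

optimizer = torch.optim.SGD(
    [
        {"params": backbone.parameters(), "lr": 1e-3},    # "remaining layers"
        {"params": bottleneck.parameters(), "lr": 1e-2},  # bottleneck layer
        {"params": classifier.parameters(), "lr": 1e-3},
    ],
    momentum=0.9,
    weight_decay=1e-3,
)

batch_size = 64     # reported for all three benchmarks
mixup_alpha = 0.2   # 0.2 for Office-31 and Office-Home, 2.0 for VisDA-2017
num_epochs = 50     # 50 / 30 / 15 for Office-31 / Office-Home / VisDA-2017
```

This only reconstructs the hyperparameters listed in the table; the JMDS scoring and CoWA-JMDS training loop themselves are not reproduced here.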