Domain Adaptation via Maximizing Surrogate Mutual Information
Authors: Haiteng Zhao, Chang Ma, Qinyu Chen, Zhi-Hong Deng
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments show that our approach is comparable with state-of-the-art unsupervised adaptation methods on standard UDA tasks. Experiment results on three challenging benchmarks demonstrate that our method performs favorably against state-of-the-art class-level UDA models. In this section, we evaluate the proposed method on three public domain adaptation benchmarks, compared with recent state-of-the-art UDA methods. We conduct an extensive ablation study to discuss our method. |
| Researcher Affiliation | Academia | Haiteng Zhao , Chang Ma , Qinyu Chen and Zhi-Hong Deng Peking University {zhaohaiteng,changma,chenqinyu,zhdeng}@pku.edu.cn |
| Pseudocode | No | No pseudocode or clearly labeled algorithm blocks were found. |
| Open Source Code | Yes | The code is available at https://github.com/zhaoht/SIDA. |
| Open Datasets | Yes | VisDA-2017 [Peng et al., 2017] is a challenging benchmark for UDA with the domain shift from synthetic data to real imagery. It contains 152,397 training images and 55,388 validation images across 12 classes. Office-31 [Saenko et al., 2010] is a commonly used dataset for UDA, where images are collected from three distinct domains: Amazon (A), Webcam (W) and DSLR (D). Office-Home [Venkateswara et al., 2017] is another classical dataset with 15,500 images of 65 categories in office and home settings, consisting of 4 domains including Artistic images (A), Clip Art images (C), Product images (P) and Real-World images (R). |
| Dataset Splits | Yes | VisDA-2017... It contains 152,397 training images and 55,388 validation images across 12 classes. Following the training and testing protocol in [Long et al., 2017a], the model is trained on the labeled training set and the unlabeled validation set, and tested on the validation set. We follow the standard protocol for UDA [Long et al., 2017b] to use all labeled source samples and all unlabeled target samples as the training data. |
| Hardware Specification | No | The paper mentions using ResNet-50 and ResNet-101 as backbone networks but does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions, or other library versions). |
| Experiment Setup | No | While the paper mentions aspects like using ResNet backbones and class-balanced sampling, it lacks specific experimental setup details such as hyperparameter values (e.g., learning rate, batch size, number of epochs, optimizer settings) in the provided text. It refers to an appendix for details, which is not available. |
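The standard UDA data protocol quoted in the table (all labeled source samples plus all unlabeled target samples form the training set; evaluation is on the target samples) can be sketched as follows. This is an illustrative sketch only, assuming a simple in-memory representation; the names `Sample` and `build_uda_training_set` are hypothetical and do not come from the paper or its code.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Sample:
    features: List[float]
    label: Optional[int]  # None marks an unlabeled target sample

def build_uda_training_set(source: List[Sample],
                           target: List[Sample]) -> List[Sample]:
    """Combine all labeled source samples with all target samples,
    with target labels hidden, per the standard UDA protocol."""
    unlabeled_target = [Sample(s.features, None) for s in target]
    return source + unlabeled_target

# Toy data: two labeled source samples, one target sample whose
# label is withheld during training and used only at test time.
source = [Sample([0.1, 0.2], 0), Sample([0.3, 0.4], 1)]
target = [Sample([0.5, 0.6], 1)]
train = build_uda_training_set(source, target)
# Training sees len(source) labeled + len(target) unlabeled samples;
# accuracy is then measured on the target set only.
```

In practice the two pools are typically wrapped in separate data loaders (source with labels, target without) so that supervised and adaptation losses can be computed per batch, but the split of "what the model may see" follows the combination above.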