Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Divide and Contrast: Source-free Domain Adaptation via Adaptive Contrastive Learning
Authors: Ziyi Zhang, Weikai Chen, Hui Cheng, Zhen Li, Siyuan Li, Liang Lin, Guanbin Li
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on VisDA, Office-Home, and the more challenging DomainNet have verified the superior performance of DaC over current state-of-the-art approaches. |
| Researcher Affiliation | Collaboration | Ziyi Zhang (1), Weikai Chen (3), Hui Cheng (2), Zhen Li (4,5), Siyuan Li (6), Liang Lin (2), Guanbin Li (2) — 1 National Key Laboratory of Novel Software Technology, Nanjing University, Nanjing, China; 2 School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China; 3 Tencent America; 4 The Chinese University of Hong Kong, Shenzhen, China; 5 Shenzhen Research Institute of Big Data, Shenzhen, China; 6 AI Lab, School of Engineering, Westlake University, Hangzhou, China |
| Pseudocode | Yes | The overall algorithm of DaC is summarized in Appendix C. |
| Open Source Code | Yes | The code is available at https://github.com/ZyeZhang/DaC.git. |
| Open Datasets | Yes | We conduct experiments on three benchmark datasets: Office-Home [31], VisDA-2017 [32], DomainNet [33]. |
| Dataset Splits | No | The paper mentions using benchmark datasets but does not explicitly provide details on how the training, validation, and test splits were performed for their experiments. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models or memory used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, or other libraries with versions). |
| Experiment Setup | Yes | The learning rate for the backbone is set as 2e-2 on Office-Home, 5e-4 on VisDA, and 1e-2 on DomainNet. We train 30 epochs for Office-Home, 60 epochs for VisDA, and 30 epochs for DomainNet. |
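The quoted experiment setup can be collected into a small per-dataset lookup. This is a minimal illustrative sketch: the dictionary layout, key names, and the `get_config` helper are assumptions for readability, not taken from the paper's released code; only the learning rates and epoch counts come from the quoted text.

```python
# Per-dataset training hyperparameters quoted in the Experiment Setup row.
# Structure and names are illustrative, not from the authors' repository.
TRAIN_CONFIG = {
    "Office-Home": {"backbone_lr": 2e-2, "epochs": 30},
    "VisDA":       {"backbone_lr": 5e-4, "epochs": 60},
    "DomainNet":   {"backbone_lr": 1e-2, "epochs": 30},
}

def get_config(dataset: str) -> dict:
    """Return the quoted backbone learning rate and epoch budget for a dataset."""
    return TRAIN_CONFIG[dataset]
```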