Towards Dynamic-Prompting Collaboration for Source-Free Domain Adaptation
Authors: Mengmeng Zhan, Zongqian Wu, Rongyao Hu, Ping Hu, Heng Tao Shen, Xiaofeng Zhu
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments conducted on three benchmark datasets showcase the superiority of our framework over previous SOTA methods. |
| Researcher Affiliation | Academia | School of Computer Science and Engineering, University of Electronic Science and Technology of China |
| Pseudocode | No | The paper describes the methodology in text and provides figures, but does not include any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statements or links indicating that source code for the described methodology is publicly available. |
| Open Datasets | Yes | We evaluate our proposed approach on three standard benchmarks in domain adaptation including Office-31 [Saenko et al., 2010], Office-Home [Venkateswara et al., 2017], and DomainNet [Peng et al., 2019]. |
| Dataset Splits | No | The paper mentions using training and target datasets but does not provide specific percentages or counts for training, validation, or test splits, nor does it refer to predefined validation splits. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU models, CPU types, or memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions 'ViT-B/16' and 'CLIP based on ViT-B/16' as models, and 'Stochastic Gradient Descent (SGD) optimizer', but it does not specify version numbers for any underlying software dependencies like Python, PyTorch, or CUDA. |
| Experiment Setup | Yes | We optimize training objectives via the Stochastic Gradient Descent (SGD) [Zinkevich et al., 2010] optimizer, given a mini-batch size of 16, the momentum of 0.9, and weight decay ratio of 1 × 10⁻⁴, respectively. |
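
The quoted experiment setup maps onto only a few lines of training code. The sketch below is a minimal illustration, assuming PyTorch (the paper does not state its framework); the model and learning rate are hypothetical placeholders, and only the mini-batch size of 16, momentum of 0.9, and weight decay of 1 × 10⁻⁴ come from the quoted text.

```python
# Minimal sketch of the quoted optimizer configuration (PyTorch assumed).
# The model, learning rate, and data are hypothetical placeholders; only the
# batch size, momentum, and weight decay are taken from the quoted setup.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical stand-in for the paper's prompt-learning module.
model = nn.Linear(512, 65)

optimizer = torch.optim.SGD(
    model.parameters(),
    lr=1e-3,           # learning rate not reported in the table; placeholder
    momentum=0.9,      # momentum of 0.9 (quoted)
    weight_decay=1e-4, # weight decay ratio of 1e-4 (quoted)
)

# Mini-batch size of 16 (quoted); synthetic data purely for illustration.
loader = DataLoader(
    TensorDataset(torch.randn(64, 512), torch.randint(0, 65, (64,))),
    batch_size=16,
    shuffle=True,
)

criterion = nn.CrossEntropyLoss()
for features, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(features), labels)
    loss.backward()
    optimizer.step()
```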