Diffusion-Based Probabilistic Uncertainty Estimation for Active Domain Adaptation
Authors: Zhekai Du, Jingjing Li
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on both ADA and Source-Free ADA settings show that our method provides more calibrated predictions than previous ADA methods and achieves favorable performance on three domain adaptation datasets. |
| Researcher Affiliation | Academia | Zhekai Du, Jingjing Li School of Computer Science and Engineering University of Electronic Science and Technology of China zhekaid@std.uestc.edu.cn, lijin117@yeah.net |
| Pseudocode | Yes | Algorithm 1 Pseudo code of DAPM-TT for ADA |
| Open Source Code | Yes | Code is available at https://github.com/TL-UESTC/DAPM. |
| Open Datasets | Yes | We evaluate our method on three widely used domain adaptation benchmarks, i.e., Office-31 [54], Office-Home [55] and VisDA [56]. |
| Dataset Splits | No | The paper describes an active learning process where labeled data is acquired in rounds for training, but it does not specify a separate, fixed validation split for hyperparameter tuning or early stopping. |
| Hardware Specification | Yes | All experiments are conducted on a single RTX 3090 GPU. |
| Software Dependencies | No | The paper mentions implementing the method on "PyTorch and MindSpore" but does not provide version numbers for these software dependencies. |
| Experiment Setup | Yes | In the adaptation stage, we utilize the SGD optimizer with a learning rate of 0.01, momentum of 0.9, and weight decay of 0.001. We set the EMA rate for the teacher model to 0.99... For Office-31 and Office-Home, we conduct adaptation for 5 epochs and train the diffusion classifier for 10 epochs in each training round. The total number of training rounds is 20. For VisDA, we set the epoch number in each stage to 1, and the total number of training rounds is 10... The batch size is 32 for ADA and 64 for SFADA. (A minimal sketch of this setup follows the table.) |
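
The reported adaptation-stage configuration can be expressed as a minimal PyTorch sketch. Only the quoted hyperparameters (SGD with lr=0.01, momentum=0.9, weight decay=0.001; EMA rate 0.99; batch size 32 for ADA) come from the paper; the model, the stand-in loss, and the `ema_update` helper are hypothetical placeholders, not the authors' implementation.

```python
# Sketch of the adaptation-stage optimizer and EMA teacher update, assuming a
# student/teacher pair. Hyperparameters are from the paper; everything else
# (model shape, loss) is a placeholder for illustration.
import copy
import torch

student = torch.nn.Linear(2048, 31)   # hypothetical classifier head
teacher = copy.deepcopy(student)      # EMA teacher: updated, never trained
for p in teacher.parameters():
    p.requires_grad_(False)

# Reported optimizer settings: SGD, lr 0.01, momentum 0.9, weight decay 0.001.
optimizer = torch.optim.SGD(
    student.parameters(), lr=0.01, momentum=0.9, weight_decay=1e-3
)

EMA_RATE = 0.99  # reported EMA rate for the teacher model

@torch.no_grad()
def ema_update(teacher, student, rate=EMA_RATE):
    """teacher <- rate * teacher + (1 - rate) * student."""
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(rate).add_(s, alpha=1.0 - rate)

# One adaptation step; the actual ADA objective is omitted here, so a
# stand-in scalar loss is used purely to make the sketch runnable.
x = torch.randn(32, 2048)             # batch size 32 for ADA
loss = student(x).logsumexp(dim=1).mean()
optimizer.zero_grad()
loss.backward()
optimizer.step()
ema_update(teacher, student)
```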