Variational Model Perturbation for Source-Free Domain Adaptation
Authors: Mengmeng Jing, Xiantong Zhen, Jingjing Li, Cees Snoek
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on several source-free benchmarks under three different evaluation settings verify the effectiveness of the proposed variational model perturbation for source-free domain adaptation. |
| Researcher Affiliation | Academia | Mengmeng Jing (1,2), Xiantong Zhen (2), Jingjing Li (1), Cees G. M. Snoek (2); (1) University of Electronic Science and Technology of China, (2) University of Amsterdam |
| Pseudocode | No | The paper describes its methods using prose and mathematical equations but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at https://github.com/mmjing/Variational_Model_Perturbation. |
| Open Datasets | Yes | The Office [47] dataset includes 3 domains. ... Office-Home [48] consists of 4 domains. ... CIFAR10-C, CIFAR100-C [49], and ImageNet-C [49] |
| Dataset Splits | Yes | Generalized SFDA [13]: we split the source data into 80% and 20% parts. In the source pre-training phase, we use the labeled 80% part to pre-train the source model. In the target adaptation phase, we use all the unlabeled target data to adapt the model. In the testing phase, we predict the remaining 20% source data and all target data. |
| Hardware Specification | No | The paper specifies network architectures used (e.g., ResNet-50, WideResNet-28, ResNeXt-29) but does not provide details on the specific hardware (e.g., GPU model, CPU type) used for experiments. |
| Software Dependencies | No | The paper mentions optimizers (Stochastic Gradient Descent, Adam) and network architectures but does not provide specific version numbers for software libraries or dependencies (e.g., PyTorch 1.x, TensorFlow 2.x). |
| Experiment Setup | Yes | As for the optimizer, following [7], we employ Stochastic Gradient Descent with weight decay 1e-3 and momentum 0.9. As for the learning rate, we set 2e-3 for the backbone model and 2e-2 for the bottleneck layer newly added in SHOT. ... β is set to 0.3 in both Office and Office-Home. The batch size is 64. |
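
The experiment-setup row above quotes the optimizer hyperparameters (SGD, weight decay 1e-3, momentum 0.9) and the split learning rates for the backbone (2e-3) and the SHOT-style bottleneck layer (2e-2). The following is a minimal PyTorch sketch of that configuration, assuming a model with hypothetical `backbone` and `bottleneck` attributes; the authors' released code may organize parameter groups differently.

```python
import torch

def build_optimizer(model):
    # Two parameter groups with the learning rates quoted in the paper:
    # 2e-3 for the pre-trained backbone, 2e-2 for the newly added bottleneck.
    # `model.backbone` / `model.bottleneck` are illustrative attribute names.
    param_groups = [
        {"params": model.backbone.parameters(), "lr": 2e-3},
        {"params": model.bottleneck.parameters(), "lr": 2e-2},
    ]
    # SGD with momentum 0.9 and weight decay 1e-3, following the quoted setup.
    return torch.optim.SGD(param_groups, momentum=0.9, weight_decay=1e-3)
```

The quoted batch size of 64 would then apply to the target-domain data loader during adaptation.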