Clarinet: A One-step Approach Towards Budget-friendly Unsupervised Domain Adaptation
Authors: Yiyang Zhang, Feng Liu, Zhen Fang, Bo Yuan, Guangquan Zhang, Jie Lu
IJCAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments show that CLARINET significantly outperforms a series of competent baselines. |
| Researcher Affiliation | Academia | ¹Shenzhen International Graduate School, Tsinghua University; ²Centre for Artificial Intelligence, University of Technology Sydney |
| Pseudocode | Yes | Algorithm 1 CLARINET: One-step BFUDA Approach |
| Open Source Code | Yes | The code of CLARINET is available at github.com/Yiyang98/BFUDA. |
| Open Datasets | Yes | Based on five commonly used datasets: MNIST (M), USPS (U), SVHN (S), MNIST-M (m) and SYN-DIGITS (Y), we verify the efficacy of CLARINET on 6 BFUDA tasks |
| Dataset Splits | No | The paper states it follows "standard protocols for unsupervised domain adaptation" but does not explicitly provide percentages or counts for training, validation, or test dataset splits. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions implementing methods "by PyTorch" but does not specify a version number for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | The batch size is set to 128 and the number of epochs is set to 500. The SGD optimizer (momentum = 0.9, weight decay = 5 × 10⁻⁵) is used with an initial learning rate of 0.005 in the adversarial network and 5 × 10⁻⁵ in the classifier. In the mapping function T, λ is set to 0.5. |
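
The reported hyperparameters translate directly into optimizer configuration. Below is a minimal sketch of that setup, assuming PyTorch (the paper names PyTorch but no version); the `AdversarialNet` and `Classifier` architectures shown are hypothetical placeholders, and only the batch size, epoch count, momentum, weight decay, and learning rates come from the paper.

```python
import torch
import torch.nn as nn

# Placeholder architectures -- the paper's actual networks are not
# specified in this report, so simple stand-ins are used here.
adversarial_net = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 2))
classifier = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 10))

# Reported settings: SGD with momentum 0.9 and weight decay 5e-5,
# initial learning rate 0.005 for the adversarial network and
# 5e-5 for the classifier.
opt_adv = torch.optim.SGD(adversarial_net.parameters(),
                          lr=0.005, momentum=0.9, weight_decay=5e-5)
opt_cls = torch.optim.SGD(classifier.parameters(),
                          lr=5e-5, momentum=0.9, weight_decay=5e-5)

BATCH_SIZE = 128   # reported batch size
NUM_EPOCHS = 500   # reported number of epochs
```

Note the two optimizers differ only in learning rate; per the reported setup, momentum and weight decay are shared across the adversarial network and the classifier.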