Adversarial Dropout Regularization
Authors: Kuniaki Saito, Yoshitaka Ushiku, Tatsuya Harada, Kate Saenko
ICLR 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through extensive experiments, we demonstrate the benefit of ADR over existing domain adaptation approaches, achieving state-of-the-art results in difficult domain shifts. |
| Researcher Affiliation | Academia | Kuniaki Saito¹, Yoshitaka Ushiku¹, Tatsuya Harada¹·², and Kate Saenko³ (¹The University of Tokyo, ²RIKEN, ³Boston University) |
| Pseudocode | No | The paper describes its training procedure in Section 3.2 using prose and equations, but does not provide any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement about releasing its source code or a link to a code repository for the methodology described. |
| Open Datasets | Yes | We use the MNIST (LeCun et al. (1998)), SVHN (Netzer et al. (2011)), and USPS datasets and follow the protocol of unsupervised domain adaptation used by Tzeng et al. (2017). |
| Dataset Splits | Yes | We used the validation domain (55,400 images) as our target domain in an unsupervised domain adaptation setting. |
| Hardware Specification | No | The paper mentions GPU memory limitations ("Due to the limit of GPU memory") when discussing batch sizes, but does not provide any specific hardware details such as GPU models, CPU types, or memory amounts used for the experiments. |
| Software Dependencies | No | The paper mentions software components like "scikit-learn" and "Adam" (optimizer), but does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | The number of iterations for Step 3 was fixed at n = 4. We used Adam (Kingma & Ba (2014)) as the optimizer and set the learning rate to 2.0 × 10⁻⁴, a value commonly reported in the GAN literature. |
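
To make the Pseudocode and Experiment Setup rows concrete, below is a minimal PyTorch sketch of the three-step adversarial training loop the paper describes in Section 3.2, using the reported Adam learning rate of 2.0 × 10⁻⁴ and n = 4 Step-3 iterations. The network definitions (`G`, `C`) and the `sensitivity` measure are illustrative stand-ins for this sketch, not the authors' implementation (none was released; see the Open Source Code row).

```python
# Hedged sketch of ADR's three-step training loop (Section 3.2), assuming PyTorch.
# G, C, and the sensitivity measure below are hypothetical stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

def sensitivity(p1, p2, eps=1e-8):
    """Symmetric KL divergence between two dropout-perturbed predictions
    (one plausible reading of the paper's critic objective)."""
    kl = lambda p, q: (p * (torch.log(p + eps) - torch.log(q + eps))).sum(1).mean()
    return 0.5 * (kl(p1, p2) + kl(p2, p1))

# Toy feature generator G and classifier C; dropout in C makes two forward
# passes in train mode produce two different stochastic predictions.
G = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU())
C = nn.Sequential(nn.Dropout(0.5), nn.Linear(256, 10))

# Adam with learning rate 2.0e-4, as reported in the paper.
opt_g = torch.optim.Adam(G.parameters(), lr=2.0e-4)
opt_c = torch.optim.Adam(C.parameters(), lr=2.0e-4)
n_steps_g = 4  # number of Step-3 (generator) iterations, fixed at n = 4

def train_batch(xs, ys, xt):
    # Step 1: train G and C to classify labeled source samples.
    opt_g.zero_grad(); opt_c.zero_grad()
    F.cross_entropy(C(G(xs)), ys).backward()
    opt_g.step(); opt_c.step()

    # Step 2: update C as a critic to MAXIMIZE sensitivity on target
    # features, while still classifying source samples correctly.
    opt_c.zero_grad()
    feat_t = G(xt).detach()
    p1, p2 = F.softmax(C(feat_t), 1), F.softmax(C(feat_t), 1)
    loss_c = F.cross_entropy(C(G(xs).detach()), ys) - sensitivity(p1, p2)
    loss_c.backward()
    opt_c.step()

    # Step 3: update G to MINIMIZE sensitivity on target, repeated n times.
    for _ in range(n_steps_g):
        opt_g.zero_grad()
        feat_t = G(xt)
        p1, p2 = F.softmax(C(feat_t), 1), F.softmax(C(feat_t), 1)
        sensitivity(p1, p2).backward()
        opt_g.step()
```

The adversarial structure lives in the sign of the sensitivity term: the dropout-equipped classifier is pushed to make its two stochastic predictions disagree on target samples, while the generator is pushed to produce features on which they agree.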