Adversarial Style Mining for One-Shot Unsupervised Domain Adaptation
Authors: Yawei Luo, Ping Liu, Tao Guan, Junqing Yu, Yi Yang
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on both cross-domain classification and segmentation benchmarks verify that ASM achieves state-of-the-art adaptation performance under the challenging one-shot setting. |
| Researcher Affiliation | Collaboration | 1 School of Computer Science & Technology, Huazhong University of Science & Technology; 2 CCAI, Zhejiang University; 3 Baidu Research; 4 ReLER, University of Technology Sydney; 5 Institute of High Performance Computing, A*STAR, Singapore |
| Pseudocode | Yes | Algorithm 1: Adversarial Style Mining |
| Open Source Code | Yes | The code is publicly available at https://github.com/RoyalVane/ASM. |
| Open Datasets | Yes | We use MNIST [21]-USPS [16]-SVHN [35] benchmarks to evaluate ASM on the one-shot cross-domain classification task. For the one-shot cross-domain segmentation task, we evaluate ASM on two benchmarks, i.e., SYNTHIA [37] → Cityscapes [5] and GTA5 [36] → Cityscapes. |
| Dataset Splits | No | We run each OSUDA experiment 5 times and report the average result; each time we randomly select the one-shot sample from the target domain. The paper describes how the one-shot sample is selected but does not specify general train/validation/test splits for the full datasets used. |
| Hardware Specification | No | The paper does not provide specific details on the hardware used, such as GPU or CPU models. |
| Software Dependencies | No | We evaluate ASM together with several state-of-the-art UDA algorithms on both classification and segmentation tasks using PaddlePaddle and PyTorch. The paper mentions frameworks but does not provide specific version numbers for any software dependencies. |
| Experiment Setup | No | More details on experimental settings are given in Appendix A and B. The main text does not include specific hyperparameters or system-level training settings. |
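The paper's Algorithm 1 is only named above, not reproduced here. As a loose illustration of the adversarial style mining idea it describes (alternating gradient ascent on a style code to mine harder stylized samples against gradient descent on the task model), here is a toy PyTorch sketch. The AdaIN-style `stylize` function, the tiny model, and all shapes and optimizers are illustrative assumptions, not the paper's actual architecture or hyperparameters.

```python
import torch
import torch.nn as nn

def stylize(x, style):
    # AdaIN-like restyling (an assumption, not the paper's exact generator):
    # replace per-channel mean/std of x with values from the style code.
    mu = x.mean(dim=(2, 3), keepdim=True)
    sigma = x.std(dim=(2, 3), keepdim=True) + 1e-5
    new_mu, new_sigma = style.chunk(2, dim=1)
    new_mu = new_mu.unsqueeze(-1).unsqueeze(-1)
    new_sigma = new_sigma.unsqueeze(-1).unsqueeze(-1)
    return (x - mu) / sigma * new_sigma + new_mu

# Toy task model and labeled source batch (stand-ins for the real ones).
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(4, 3, 32, 32)          # source images (toy data)
y = torch.randint(0, 10, (4,))         # source labels

# Style code; in ASM it would be anchored around the one target sample's style.
style = torch.randn(4, 6, requires_grad=True)
style_opt = torch.optim.SGD([style], lr=0.1)

for step in range(3):
    # (1) Mine a harder style: gradient *ascent* on the task loss
    #     w.r.t. the style code (implemented by negating the loss).
    loss = loss_fn(model(stylize(x, style)), y)
    style_opt.zero_grad()
    (-loss).backward()
    style_opt.step()

    # (2) Adapt the task model: gradient descent on the newly stylized batch.
    loss = loss_fn(model(stylize(x, style.detach())), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The two optimizers pull in opposite directions on the same loss, which is the minimax structure the "adversarial" in the title refers to; the real method additionally constrains mined styles to stay near the single target sample.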