Deep Domain-Adversarial Image Generation for Domain Generalization
Authors: Kaiyang Zhou, Yongxin Yang, Timothy Hospedales, Tao Xiang
AAAI 2020, pp. 13025-13032
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on four DG datasets demonstrate the effectiveness of our approach. |
| Researcher Affiliation | Collaboration | Kaiyang Zhou,1 Yongxin Yang,1 Timothy Hospedales,2,3 Tao Xiang1,3 1University of Surrey, 2University of Edinburgh, 3Samsung AI Center, Cambridge |
| Pseudocode | Yes | Algorithm 1 Deep Domain-Adversarial Image Generation |
| Open Source Code | Yes | The code is available at https://github.com/KaiyangZhou/DG-research-pytorch. |
| Open Datasets | Yes | MNIST (LeCun et al. 1998), MNIST-M (Ganin and Lempitsky 2015), SVHN (Netzer et al. 2011) and SYN (Ganin and Lempitsky 2015)... We randomly select 600 images per class from each dataset and split the data into 80% for training and 20% for validation. |
| Dataset Splits | Yes | We randomly select 600 images per class from each dataset and split the data into 80% for training and 20% for validation. |
| Hardware Specification | No | The paper mentions using ResNet18 and OSNet as backbones but does not specify any hardware details like GPU models, CPU types, or memory used for training or inference. |
| Software Dependencies | No | The paper states, "The code is based on Torchreid (Zhou and Xiang 2019)." However, it does not specify any version numbers for Torchreid or other software dependencies like Python, PyTorch, or CUDA. |
| Experiment Setup | Yes | The networks are trained from scratch using SGD, initial learning rate of 0.05, batch size of 128 and weight decay of 5e-4 for 50 epochs. The learning rate is decayed by 0.1 every 20 epochs. |
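The dataset-split row describes sampling 600 images per class and dividing them 80/20 into train/validation. A minimal sketch of that procedure, assuming a flat list of labels indexed by image id (the paper does not release this particular sampling code, so the function and parameter names are hypothetical):

```python
import random

def make_split(labels, per_class=600, train_frac=0.8, seed=0):
    """Sample `per_class` indices per class, then split them 80/20
    into train and validation sets, as described in the paper.
    Names here are illustrative, not the authors' code."""
    rng = random.Random(seed)
    by_class = {}
    for idx, y in enumerate(labels):
        by_class.setdefault(y, []).append(idx)
    train, val = [], []
    for y, idxs in by_class.items():
        chosen = rng.sample(idxs, per_class)
        n_train = int(per_class * train_frac)  # 480 train, 120 val per class
        train.extend(chosen[:n_train])
        val.extend(chosen[n_train:])
    return train, val
```

With 600 images per class this yields 480 training and 120 validation images per class; fixing the seed makes the split reproducible, which the paper's quote does not specify.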
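The reported experiment setup (SGD, initial learning rate 0.05, batch size 128, weight decay 5e-4, 50 epochs, learning rate decayed by 0.1 every 20 epochs) maps directly onto a standard PyTorch optimizer/scheduler pair. A sketch under those reported values; the momentum value and the stand-in model are assumptions, since neither appears in the quoted setup:

```python
import torch
import torch.nn as nn

model = nn.Linear(32, 10)  # stand-in for the ResNet18/OSNet backbones

# Hyperparameters as reported: SGD, lr 0.05, weight decay 5e-4,
# 50 epochs, lr decayed by 0.1 every 20 epochs.
# Momentum is not stated in the paper's quote; 0.9 is an assumption.
optimizer = torch.optim.SGD(model.parameters(), lr=0.05,
                            momentum=0.9, weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.1)

for epoch in range(50):
    # ... one pass over batches of size 128 would go here ...
    optimizer.step()   # placeholder step so the scheduler follows an optimizer step
    scheduler.step()   # decays lr at epochs 20 and 40
```

After 50 epochs the learning rate has been decayed twice, ending at 0.05 x 0.1^2 = 5e-4.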