CyCADA: Cycle-Consistent Adversarial Domain Adaptation

Authors: Judy Hoffman, Eric Tzeng, Taesung Park, Jun-Yan Zhu, Phillip Isola, Kate Saenko, Alexei Efros, Trevor Darrell

ICML 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate our method on a variety of visual recognition and prediction settings, including digit classification and semantic segmentation of road scenes, advancing state-of-the-art performance for unsupervised adaptation from synthetic to real world driving domains.
Researcher Affiliation | Collaboration | Judy Hoffman (1), Eric Tzeng (1), Taesung Park (1), Jun-Yan Zhu (1), Phillip Isola (1, 2), Kate Saenko (3), Alexei A. Efros (1), Trevor Darrell (1). Affiliations: (1) EECS and BAIR, UC Berkeley; (2) OpenAI (work done while at UC Berkeley); (3) CS Department, Boston University. Correspondence to: Judy Hoffman <jhoffman@eecs.berkeley.edu>.
Pseudocode | No | The paper describes the model and its components using equations and descriptive text, but it does not include any explicit pseudocode or algorithm blocks. (An illustrative loss-structure sketch follows this table.)
Open Source Code | No | The paper does not contain any explicit statement about releasing the source code for the described methodology, nor does it provide a link to a code repository.
Open Datasets | Yes | We train our model using the training sets, MNIST (LeCun et al., 1998) 60,000 images, USPS 7,291 images, standard SVHN train 73,257 images. ... For our synthetic source domain, we use the GTA5 dataset (Richter et al., 2016) extracted from the game Grand Theft Auto V, which contains 24966 images. We consider adaptation from GTA5 to the real-world Cityscapes dataset (Cordts et al., 2016), from which we used 19998 images without annotation for training and 500 images for validation.
Dataset Splits | Yes | We consider adaptation from GTA5 to the real-world Cityscapes dataset (Cordts et al., 2016), from which we used 19998 images without annotation for training and 500 images for validation. (A dataset-loading sketch for the digit splits follows this table.)
Hardware Specification | No | The paper mentions 'larger GPU memory' in the context of memory constraints but does not specify any particular GPU models, CPU types, or other hardware used for the experiments.
Software Dependencies | No | The paper mentions network architectures such as LeNet, VGG16-FCN8s, and DRN-26 but does not list software dependencies with version numbers (e.g., deep learning frameworks, libraries, or compilers).
Experiment Setup | No | The paper defers to Supplemental A.1.1 and Appendix A.1.2 for full implementation details; the specific experimental setup (e.g., hyperparameters, optimizer settings) is not given in the main body of the paper.
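As noted in the Pseudocode row, the method is specified only through equations and prose. The following PyTorch sketch is a reading aid, not the authors' implementation: it lays out the loss terms those equations describe (task loss on translated source images, image- and feature-level adversarial losses, cycle consistency, and semantic consistency) using toy networks and random tensors. Every name here (G_st, G_ts, D_t, D_s, D_f, f_S, f_T) is a placeholder introduced for illustration, and the exact arguments of the feature-level adversarial term should be taken from the paper's equations rather than from this sketch.

# Illustrative sketch only (not the authors' code): the CyCADA-style objective
# combines a task loss, image- and feature-level adversarial losses, a
# cycle-consistency loss, and a semantic-consistency loss. Toy modules stand in
# for the real generators, discriminators, and task networks.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins (assumptions, chosen only so the losses are computable end to end).
G_st = nn.Conv2d(1, 1, 3, padding=1)                         # generator: source -> target style
G_ts = nn.Conv2d(1, 1, 3, padding=1)                         # generator: target -> source style
D_t  = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 1))    # image discriminator, target domain
D_s  = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 1))    # image discriminator, source domain
D_f  = nn.Linear(10, 1)                                      # feature/logit-level discriminator
f_S  = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))   # source task model, pretrained and kept fixed
f_T  = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))   # target task model being adapted
for p in f_S.parameters():
    p.requires_grad_(False)                                  # the source classifier is frozen

bce = nn.BCEWithLogitsLoss()

def adv_loss(disc, real, fake):
    # Standard GAN loss from the discriminator's point of view;
    # the generators are trained to minimize the flipped objective.
    real_logits, fake_logits = disc(real), disc(fake)
    return (bce(real_logits, torch.ones_like(real_logits)) +
            bce(fake_logits, torch.zeros_like(fake_logits)))

# Dummy mini-batches: labeled source digits and unlabeled target digits.
x_s = torch.randn(4, 1, 28, 28)
y_s = torch.randint(0, 10, (4,))
x_t = torch.randn(4, 1, 28, 28)

x_s2t = G_st(x_s)   # source image rendered in target style
x_t2s = G_ts(x_t)   # target image rendered in source style

# 1) Task loss: train the target model on translated source images with source labels.
loss_task = F.cross_entropy(f_T(x_s2t), y_s)

# 2) Image-level adversarial losses in both translation directions.
loss_gan_img = adv_loss(D_t, x_t, x_s2t) + adv_loss(D_s, x_s, x_t2s)

# 3) Feature-level adversarial loss (shown here on task-network outputs for
#    target vs. translated-source images; an assumption, see the paper's equations).
loss_gan_feat = adv_loss(D_f, f_T(x_t), f_T(x_s2t))

# 4) Cycle consistency: translating there and back should reconstruct the input.
loss_cyc = F.l1_loss(G_ts(x_s2t), x_s) + F.l1_loss(G_st(x_t2s), x_t)

# 5) Semantic consistency: the frozen source classifier should assign the same
#    labels before and after translation.
with torch.no_grad():
    p_s = f_S(x_s).argmax(dim=1)
    p_t = f_S(x_t).argmax(dim=1)
loss_sem = F.cross_entropy(f_S(x_s2t), p_s) + F.cross_entropy(f_S(x_t2s), p_t)

total = loss_task + loss_gan_img + loss_gan_feat + loss_cyc + loss_sem
print(f"combined CyCADA-style objective (toy networks): {total.item():.3f}")

In practice the adversarial terms are optimized with alternating generator and discriminator updates rather than as a single summed loss; the sum above is only meant to show which terms enter the objective.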
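The digit training splits quoted in the Open Datasets and Dataset Splits rows (MNIST 60,000, USPS 7,291, SVHN 73,257 images) correspond to the standard train splits available through torchvision. The minimal sketch below assumes torchvision is installed and downloads are permitted; it retrieves those splits and prints their sizes. The GTA5 and Cityscapes segmentation data must be obtained separately and are not covered here.

# Minimal sketch (assumption: torchvision is installed and downloads are permitted).
# Fetches the standard digit train splits cited above and prints their sizes.
from torchvision import datasets

root = "./data"  # placeholder local directory

mnist_train = datasets.MNIST(root, train=True, download=True)    # paper: 60,000 images
usps_train  = datasets.USPS(root, train=True, download=True)     # paper: 7,291 images
svhn_train  = datasets.SVHN(root, split="train", download=True)  # paper: 73,257 images

for name, ds, reported in [("MNIST", mnist_train, 60000),
                           ("USPS", usps_train, 7291),
                           ("SVHN", svhn_train, 73257)]:
    print(f"{name}: {len(ds)} train images (paper reports {reported:,})")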