Robustness via Uncertainty-aware Cycle Consistency

Authors: Uddeshya Upadhyay, Yanbei Chen, Zeynep Akata

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We compare our model with a wide variety of state-of-the-art methods on various challenging tasks including unpaired image translation of natural images, using standard datasets, spanning autonomous driving, maps, facades, and also in medical imaging domain consisting of MRI. Experimental results demonstrate that our method exhibits stronger robustness towards unseen perturbations in test data. In this section, we first describe our experimental setup and implementation details. We compare our model to a wide variety of state-of-the-art methods quantitatively and qualitatively. Finally we provide an ablation analysis to study the rationale of our model formulation.
Researcher Affiliation | Academia | Uddeshya Upadhyay¹, Yanbei Chen¹, Zeynep Akata¹,²; ¹University of Tübingen, ²Max Planck Institute for Intelligent Systems
Pseudocode | No | The paper describes its methods in narrative text but does not include any explicit pseudocode or algorithm blocks.
Open Source Code | Yes | Code is released here: https://github.com/ExplainableML/UncertaintyAwareCycleConsistency
Open Datasets | Yes | We evaluate on four standard datasets used for image-to-image translation: (i) Cityscapes [19] contains street scene images with segmentation maps, including 2,975 training and 500 validation and test images; (ii) Google maps [4] contains 1,096 training and test images scraped from Google maps with aerial photographs and maps; (iii) CMP Facade [20] contains 400 images from the CMP Facade Database including architectural facades labels and photos; (iv) IXI [21] is a medical imaging dataset with 15,000/5,000/10,000 training/test/validation images, including T1 MRI and T2 MRI.
Dataset Splits | Yes | Cityscapes [19] contains street scene images with segmentation maps, including 2,975 training and 500 validation and test images; IXI [21] is a medical imaging dataset with 15,000/5,000/10,000 training/test/validation images, including T1 MRI and T2 MRI.
Hardware Specification | No | The paper does not specify the exact hardware (e.g., GPU/CPU models, memory) used for running the experiments. It only vaguely mentions 'limited compute'.
Software Dependencies | No | The paper mentions using the 'Adam optimizer' but does not specify the deep learning framework used (e.g., PyTorch, TensorFlow) or version numbers for any software dependencies.
Experiment Setup | Yes | All the networks were trained using Adam optimizer [58] with a mini-batch size of 2. The initial learning rate was set to 2e-4 and cosine annealing was used to decay the learning rate over 1000 epochs. The hyper-parameters, (λ1, λ2) (Eq. (12)) were set to (10, 2).
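
The reported optimizer and schedule can be reproduced with a few lines of configuration. Below is a minimal sketch assuming PyTorch (the paper does not name the framework, per the Software Dependencies entry above); the network, data loading, and the uncertainty-aware cycle-consistency loss are placeholders, not the authors' implementation.

```python
# Sketch of the reported training configuration: Adam, initial LR 2e-4,
# cosine annealing over 1000 epochs, loss weights (λ1, λ2) = (10, 2).
# The model below is a hypothetical stand-in for the paper's generators.
import torch
import torch.nn as nn

model = nn.Conv2d(3, 3, kernel_size=3, padding=1)  # placeholder network

optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=1000)

lambda1, lambda2 = 10.0, 2.0  # (λ1, λ2) from Eq. (12) of the paper
batch_size = 2                # mini-batch size reported in the paper

for epoch in range(1000):
    # per mini-batch (elided): compute the λ1/λ2-weighted objective, then
    # optimizer.zero_grad(); loss.backward(); optimizer.step()
    scheduler.step()          # apply cosine annealing once per epoch
```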