ClimateGAN: Raising Climate Change Awareness by Generating Images of Floods

Authors: Victor Schmidt, Alexandra Luccioni, Mélisande Teng, Tianyu Zhang, Alexia Reynaud, Sunand Raghupathi, Gautier Cosne, Adrien Juraver, Vahe Vardanyan, Alex Hernández-García, Yoshua Bengio

ICLR 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this paper, we describe the details of our framework, thoroughly evaluate the main components of our architecture and demonstrate that our model is capable of robustly generating photo-realistic flooding on street images.
Researcher Affiliation | Academia | 1 Mila, Québec AI Institute, Montréal, Canada; 2 Université de Montréal, Montréal, Canada; 3 Columbia University, New York City, USA; 4 CDRIN, Matane, Canada
Pseudocode | No | The paper describes the model architecture and procedures using text and diagrams, but does not provide formal pseudocode or algorithm blocks.
Open Source Code | No | The paper provides a link to a publicly available *dataset* ('We make this data set publicly available¹ to enable further research. ¹ https://github.com/cc-ai/mila-simulated-floods') but does not explicitly state that the *source code for their methodology* (the ClimateGAN model) is open-source or provided via a link.
Open Datasets | Yes | Overall, we gathered approximately 20,000 images from 2,000 different viewpoints in the simulated world, which we used to train the Masker. We make this data set publicly available¹ to enable further research. ¹ https://github.com/cc-ai/mila-simulated-floods ... We also included images without floods of typical streets and houses, aiming to cover a broad scope of geographical regions and types of scenery: urban, suburban and rural, with an emphasis on images from the Cityscapes (Cordts et al., 2016) and Mapillary (Neuhold et al., 2017) data sets.
Dataset Splits | No | The paper mentions training data and a test set, but does not explicitly provide details for a validation dataset split (e.g., percentages or counts).
Hardware Specification | No | The paper mentions 'GPU hours' and 'GPU usage' when discussing carbon footprint, but it does not specify any particular GPU models, CPU models, or other specific hardware configurations used for running the experiments.
Software Dependencies | No | The paper mentions various software components and models (e.g., Unity3D engine, DeepLabv3+, HRNet, CycleGAN, SPADE, WGAN), but it does not provide specific version numbers for any of them, which is necessary for reproducible software dependencies.
Experiment Setup | Yes | The Masker's final loss sums the losses of the three decoders: L_Masker = L_Depth + L_Seg + L_Mask. ... We trained the Painter on the 1200 real flooded images, inferring m from pseudo labels of water segmented by a pre-trained DeepLabv3+ model. ... we limited this procedure to the first ten epochs of training. ... We used the scale-invariant loss from MiDaS (Lasinger et al., 2019)... In the simulated domain, we used a binary cross-entropy loss L_BCE(y_ms, m_s) with the ground-truth mask y_ms. ... As per the DADA approach, we also added an entropy minimization loss to increase the mask decoder's confidence in its real-domain predictions... Similarly to the segmentation decoder, we adversarially trained the flood mask decoder with a WGAN loss L_WGAN... According to Park et al. (2019), a perceptual VGG loss (Ledig et al., 2017) and a discriminator feature-matching loss (Salimans et al., 2016) are essential for good performance.
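
For readers reconstructing the setup, below is a minimal PyTorch sketch of the Masker's summed loss L_Masker = L_Depth + L_Seg + L_Mask as quoted above. It assumes unit weights on the three terms, a simplified scale-invariant depth term in the spirit of MiDaS, and hypothetical tensor shapes; the entropy-minimization and WGAN terms mentioned in the quote are omitted. This is an illustrative approximation, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def scale_invariant_depth_loss(pred, target, eps=1e-6):
    # Simplified scale-invariant log-depth loss, in the spirit of the
    # MiDaS loss (Lasinger et al., 2019); the paper's exact form may differ.
    diff = torch.log(pred + eps) - torch.log(target + eps)
    return (diff ** 2).mean() - diff.mean() ** 2

def masker_loss(depth_pred, depth_gt, seg_logits, seg_gt, mask_logits, mask_gt):
    # L_Masker = L_Depth + L_Seg + L_Mask (unit weights are an assumption).
    l_depth = scale_invariant_depth_loss(depth_pred, depth_gt)
    # Cross-entropy for the segmentation decoder.
    l_seg = F.cross_entropy(seg_logits, seg_gt)
    # Binary cross-entropy on the simulated-domain flood mask, i.e.
    # L_BCE(y_ms, m_s) against the ground-truth mask.
    l_mask = F.binary_cross_entropy_with_logits(mask_logits, mask_gt)
    return l_depth + l_seg + l_mask

# Toy example: batch of 2, 19 segmentation classes, 64x64 resolution.
depth_pred = torch.rand(2, 1, 64, 64) + 0.1   # positive depths for the log
depth_gt = torch.rand(2, 1, 64, 64) + 0.1
seg_logits = torch.randn(2, 19, 64, 64)
seg_gt = torch.randint(0, 19, (2, 64, 64))
mask_logits = torch.randn(2, 1, 64, 64)
mask_gt = torch.randint(0, 2, (2, 1, 64, 64)).float()
print(masker_loss(depth_pred, depth_gt, seg_logits, seg_gt, mask_logits, mask_gt))
```

In practice the three decoders share an encoder, so summing the per-decoder losses lets a single backward pass train all heads jointly; any relative weighting between the terms would be a tuning choice the quoted text does not specify.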