Finding an Unsupervised Image Segmenter in Each of Your Deep Generative Models

Authors: Luke Melas-Kyriazi, Christian Rupprecht, Iro Laina, Andrea Vedaldi

ICLR 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 'Extensive experiments on five segmentation datasets across twelve different GANs demonstrate the effectiveness and generalizability of our approach.'
Researcher Affiliation | Academia | University of Oxford, {lukemk,chrisr,iro,av}@robots.ox.ac.uk
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. Figure 2 provides a pipeline diagram, not pseudocode.
Open Source Code | Yes | 'We upload code to the Supplementary Material to fully reproduce all experiments. This code contains a README file with a detailed description of the code structure, which should help enable others to reproduce and later extend upon our work.'
Open Datasets | Yes | 'To demonstrate the efficacy of our method across resolutions and datasets, we implement both GANs trained on ImageNet (Deng et al., 2009) at a resolution of 128px and GANs trained on the smaller Tiny ImageNet dataset (100,000 images split into 200 classes) at a resolution of 64px. All experiments performed across all GANs utilize the same set of hyperparameters for both optimization and segmentation.'
Dataset Splits | No | The paper mentions using standard evaluation datasets (e.g., CUB, Flowers) but does not specify explicit validation splits for them. For training, the authors use GAN-generated data, which is effectively infinite and does not require fixed splits.
Hardware Specification | No | The paper states, 'Our results do not require extremely large amounts of compute; they can be reproduced with a single GPU by researchers with computational constraints.' This is not specific enough to identify the hardware used.
Software Dependencies | No | The paper mentions using the Adam optimizer and a UNet architecture, but does not provide specific version numbers for any software libraries or dependencies (e.g., PyTorch version, TensorFlow version).
Experiment Setup | Yes | 'We generate latent codes z ~ N(0, 1) and optimize the vector v_l (or v_d) by gradient descent with the Adam (Kingma & Ba, 2014) optimizer and learning rate 0.05. We use λ = 5 for the light direction v_l and λ = 5 for the dark direction v_d. We perform 1000 optimization steps, by which point v_l (or v_d) has converged. [...] we train for 12000 steps using Adam with learning rate 10^-3 and batch size 95, decaying the learning rate by a factor of 0.2 at iteration 8000.'
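
The setup quoted above describes two stages: optimizing latent 'light' and 'dark' direction vectors, and then training a segmentation network on GAN-generated data. The sketch below illustrates both stages in PyTorch under stated assumptions: the generator, latent dimensionality, and the brightness objective are hypothetical placeholders (the paper's exact loss is not given in this excerpt), and only the quoted hyperparameters are taken from the paper's description.

import torch

def optimize_direction(generator, latent_dim, sign=+1.0, lam=5.0,
                       steps=1000, lr=0.05, batch_size=32, device="cuda"):
    """Sketch: find a latent direction v that brightens (sign=+1) or darkens
    (sign=-1) generated images; lam corresponds to the reported λ = 5."""
    v = torch.zeros(latent_dim, device=device, requires_grad=True)
    opt = torch.optim.Adam([v], lr=lr)                           # Adam, learning rate 0.05
    for _ in range(steps):                                       # 1000 optimization steps
        z = torch.randn(batch_size, latent_dim, device=device)   # z ~ N(0, 1)
        shifted = generator(z + v)                               # images from the shifted latent code
        # Assumed objective: push mean image brightness in the desired direction
        # while penalizing large shifts; this is a placeholder, not the paper's loss.
        loss = -sign * shifted.mean() + lam * (v ** 2).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return v.detach()

def make_segmenter_optimizer(unet):
    """Sketch of the reported segmenter schedule: Adam with learning rate 10^-3,
    decayed by a factor of 0.2 at iteration 8000 (12000 steps, batch size 95)."""
    opt = torch.optim.Adam(unet.parameters(), lr=1e-3)
    sched = torch.optim.lr_scheduler.MultiStepLR(opt, milestones=[8000], gamma=0.2)
    return opt, sched

When training the segmenter, sched.step() would be called once per iteration so that the milestone at 8000 matches the reported decay point.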