Using latent space regression to analyze and leverage compositionality in GANs

Authors: Lucy Chai, Jonas Wulff, Phillip Isola

ICLR 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this work, we investigate regression into the latent space as a probe to understand the compositional properties of GANs. We find that combining the regressor and a pretrained generator provides a strong image prior, allowing us to create composite images from a collage of random image parts at inference time while maintaining global consistency (see the composite-image sketch after this table). To compare compositional properties across different generators, we measure the trade-offs between reconstruction of the unrealistic input and image quality of the regenerated samples. We conduct experiments to quantify this independence effect. Our method extends to a number of related applications, such as image inpainting or example-based image editing, which we demonstrate on several GANs and datasets. Using pre-trained Progressive GAN (Karras et al., 2017) and StyleGAN2 (Karras et al., 2019b) generators, we conduct experiments on CelebA-HQ and FFHQ faces and LSUN cars, churches, living rooms, and horses to investigate the compositional properties that GANs learn from data.
Researcher Affiliation | Academia | Lucy Chai, Jonas Wulff & Phillip Isola, MIT CSAIL, Cambridge, MA 02139, USA. {lrchai,wulff,phillipi}@mit.edu
Pseudocode | No | The paper does not contain any explicitly labeled 'Pseudocode' or 'Algorithm' blocks.
Open Source Code | Yes | Code is available on our project page: https://chail.github.io/latent-composition/.
Open Datasets | Yes | Using pre-trained Progressive GAN (Karras et al., 2017) and StyleGAN2 (Karras et al., 2019b) generators, we conduct experiments on CelebA-HQ and FFHQ faces and LSUN cars, churches, living rooms, and horses to investigate the compositional properties that GANs learn from data.
Dataset Splits | No | The paper does not explicitly provide training/validation/test dataset splits. It states that the encoder is trained by sampling 'z randomly from the latent distribution' and using 'a pretrained generator G to obtain the target image x = G(z)', but does not specify how this generated data is split for training and validation. For evaluation, it mentions using '50k samples' for FID or '200 collages' for density/L1, but these are not defined as splits of larger datasets.
Hardware Specification | No | The paper mentions that 'Training takes from two days to about a week on a single GPU', but does not specify the GPU model or type used for the experiments.
Software Dependencies | No | The paper mentions the Adam optimizer (Kingma & Ba, 2014) and a ResNet backbone (ResNet-18 for ProGAN, ResNet-34 for StyleGAN; He et al., 2016), but does not provide version numbers for key software dependencies such as PyTorch/TensorFlow or Python.
Experiment Setup | Yes | We train the encoders using a ResNet backbone (ResNet-18 for ProGAN and ResNet-34 for StyleGAN; He et al., 2016). The encoders are trained with the Adam optimizer (Kingma & Ba, 2014) with learning rate lr = 0.0001. For ProGAN encoders, we use batch size 16 for the 256-resolution generators and train for 500k batches. For the 1024-resolution generator, we use batch size 4 and 400k batches. We train the StyleGAN encoders for 680k batches (256 and 512 resolution) or 580k batches (1024 resolution), and add an identity loss (Richardson et al., 2020) with weight λ = 1.0 on the FFHQ encoder. (A simplified sketch of this training loop appears after this table.)
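
The Research Type row describes the core mechanism: an encoder (regressor) maps an image, even an unrealistic collage of pasted parts, into the generator's latent space, and the frozen pretrained generator re-synthesizes a globally coherent image. The following is a minimal sketch of that composite step, assuming an encoder E, a generator G, and tensor shapes chosen for illustration rather than the authors' exact interfaces.

```python
import torch


def composite_from_collage(E, G, parts, masks):
    """Regenerate a coherent image from a collage of image parts.

    E:     encoder (regressor) mapping images to the generator's latent space.
    G:     frozen, pretrained GAN generator mapping latents back to images.
    parts: list of (B, 3, H, W) image tensors supplying the collage regions.
    masks: list of (B, 1, H, W) binary masks, one per part.
    """
    # Paste the masked parts together; the result may be unrealistic
    # (color mismatches, seams, missing regions).
    collage = torch.zeros_like(parts[0])
    for img, mask in zip(parts, masks):
        collage = collage * (1 - mask) + img * mask

    # Regress the collage into latent space and decode it again: the
    # generator acts as an image prior that enforces global consistency.
    with torch.no_grad():
        z = E(collage)
        composite = G(z)
    return collage, composite
```

The same encoder-generator pair supports the related applications listed above: masking out a region and regenerating yields inpainting, and pasting a part from a reference image yields example-based editing.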
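
The Dataset Splits and Experiment Setup rows together imply the encoder training procedure: latents are sampled from the prior, the pretrained generator produces the target images, and a ResNet-backbone encoder is optimized with Adam at lr = 0.0001. The sketch below follows that recipe under simplifying assumptions: a plain torchvision ResNet-18 stands in for the authors' encoder, and the loss keeps only latent and pixel terms (the paper additionally uses masking, perceptual, and identity losses).

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

latent_dim = 512   # typical ProGAN / StyleGAN latent size (assumption)
batch_size = 16    # 256-resolution ProGAN setting reported in the paper
device = "cuda" if torch.cuda.is_available() else "cpu"

# Hypothetical encoder: ResNet-18 backbone with a linear head to the latent space.
encoder = resnet18(num_classes=latent_dim).to(device)
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-4)


def train_step(G):
    """One encoder update; G is a frozen generator: (B, latent_dim) -> (B, 3, H, W)."""
    # Sample latents from the prior and let the generator create the targets,
    # so no external training split is needed.
    z = torch.randn(batch_size, latent_dim, device=device)
    with torch.no_grad():
        x = G(z)

    z_hat = encoder(x)   # regress the image back into latent space
    x_hat = G(z_hat)     # reconstruct through the frozen generator

    # Simplified objective: latent recovery plus pixel reconstruction.
    loss = F.mse_loss(z_hat, z) + F.l1_loss(x_hat, x)

    optimizer.zero_grad()
    loss.backward()      # gradients flow through G's activations, but only
    optimizer.step()     # the encoder's parameters are updated
    return loss.item()
```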