GRAF: Generative Radiance Fields for 3D-Aware Image Synthesis

Authors: Katja Schwarz, Yiyi Liao, Michael Niemeyer, Andreas Geiger

NeurIPS 2020

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | We systematically analyze our approach on several challenging synthetic and real-world datasets. Our experiments reveal that radiance fields are a powerful representation for generative image synthesis, leading to 3D consistent models that render with high fidelity. |
| Researcher Affiliation | Academia | Katja Schwarz, Yiyi Liao, Michael Niemeyer, Andreas Geiger; Autonomous Vision Group, MPI for Intelligent Systems and University of Tübingen; {firstname.lastname}@tue.mpg.de |
| Pseudocode | No | The paper describes the model architecture and training procedure in text and diagrams (e.g., Figure 2), but it does not include any explicitly labeled pseudocode or algorithm blocks. (An illustrative sketch of the described generator follows this table.) |
| Open Source Code | Yes | We release our code and datasets at https://github.com/autonomousvision/graf. |
| Open Datasets | Yes | We consider two synthetic and three real-world datasets in our experiments. ... We render 150k Chairs from Photoshapes [49]... We further use the Carla Driving simulator [12] to create 10k images... We use the Faces dataset which comprises celebA [31] and celebA-HQ [24]... In addition, we consider the Cats dataset [73] and the Caltech-UCSD Birds-200-2011 [66] dataset. |
| Dataset Splits | No | The paper uses the datasets for training and evaluation but does not specify how training, validation, and test splits were constructed (e.g., an 80/10/10 split or per-split sample counts). |
| Hardware Specification | No | The paper does not specify the hardware used to run the experiments, such as GPU models, CPU specifications, or memory. |
| Software Dependencies | No | The paper mentions techniques such as "RMSprop [27]", "spectral normalization [37]", and "instance normalization [65]", but it names no software libraries, frameworks, or version numbers (e.g., Python, PyTorch, TensorFlow, or CUDA versions). |
| Experiment Setup | Yes | We use spectral normalization [37] and instance normalization [65] in our discriminator and train our approach using RMSprop [27] with a batch size of 8 and a learning rate of 0.0005 and 0.0001 for generator and discriminator, respectively. At inference, we randomly sample zs, za and ξ, and predict a color value for all pixels in the image. (A hedged configuration sketch follows the table.) |
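Since the paper contains no pseudocode, the following is a minimal, hypothetical sketch of the generator it describes: a conditional radiance field g(x, d, zs, za) -> (density, color) that is volume-rendered along camera rays. All layer sizes, the shape/appearance branch split, and every function name here are illustrative assumptions rather than the authors' exact architecture, and the positional encoding GRAF applies to x and d is omitted for brevity.

```python
# Hypothetical sketch of a GRAF-style generator: a radiance field
# conditioned on a shape code z_s and an appearance code z_a, followed
# by standard NeRF-style volume rendering. Sizes and layout are assumed.
import torch
import torch.nn as nn

class ConditionalRadianceField(nn.Module):
    def __init__(self, dim_zs=128, dim_za=128, hidden=256):
        super().__init__()
        # Shape branch: 3D point + shape code -> features and density.
        self.shape_net = nn.Sequential(
            nn.Linear(3 + dim_zs, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU())
        self.sigma_head = nn.Linear(hidden, 1)
        # Appearance branch: features + view direction + appearance code -> RGB.
        self.color_net = nn.Sequential(
            nn.Linear(hidden + 3 + dim_za, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid())

    def forward(self, x, d, z_s, z_a):
        h = self.shape_net(torch.cat([x, z_s], dim=-1))
        sigma = torch.relu(self.sigma_head(h))   # non-negative density
        rgb = self.color_net(torch.cat([h, d, z_a], dim=-1))
        return sigma, rgb

def render_rays(field, origins, dirs, z_s, z_a, n_samples=64, near=0.5, far=2.0):
    """Volume-render a batch of rays with simple uniform quadrature."""
    t = torch.linspace(near, far, n_samples)                       # (S,)
    x = origins[:, None, :] + t[None, :, None] * dirs[:, None, :]  # (R, S, 3)
    d = dirs[:, None, :].expand_as(x)
    zs = z_s[None, None, :].expand(*x.shape[:2], -1)
    za = z_a[None, None, :].expand(*x.shape[:2], -1)
    sigma, rgb = field(x, d, zs, za)
    delta = (far - near) / n_samples
    alpha = 1.0 - torch.exp(-sigma.squeeze(-1) * delta)            # (R, S)
    trans = torch.cumprod(torch.cat(
        [torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=1), dim=1)[:, :-1]
    weights = alpha * trans                                        # (R, S)
    return (weights[..., None] * rgb).sum(dim=1)                   # (R, 3)

# Example: render 4 rays from a toy camera with random latent codes.
field = ConditionalRadianceField()
origins = torch.zeros(4, 3)
dirs = nn.functional.normalize(torch.randn(4, 3), dim=-1)
colors = render_rays(field, origins, dirs, torch.randn(128), torch.randn(128))
```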
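The quoted experiment setup translates almost directly into a PyTorch configuration. The sketch below wires up RMSprop with the stated batch size and learning rates and applies spectral and instance normalization in the discriminator; the discriminator's convolutional layout is an illustrative placeholder, since the paper's exact network is not quoted here.

```python
# Hedged sketch of the quoted optimization setup: RMSprop, batch size 8,
# learning rates 0.0005 (generator) and 0.0001 (discriminator), with
# spectral and instance normalization in the discriminator.
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

def make_discriminator(in_ch=3, ch=64):
    # Layer layout is an assumption for illustration; only the use of
    # spectral_norm and InstanceNorm2d is taken from the paper.
    return nn.Sequential(
        spectral_norm(nn.Conv2d(in_ch, ch, 4, stride=2, padding=1)),
        nn.InstanceNorm2d(ch),
        nn.LeakyReLU(0.2),
        spectral_norm(nn.Conv2d(ch, 2 * ch, 4, stride=2, padding=1)),
        nn.InstanceNorm2d(2 * ch),
        nn.LeakyReLU(0.2),
        spectral_norm(nn.Conv2d(2 * ch, 1, 4)),  # real/fake logits
    )

generator = ConditionalRadianceField()  # from the sketch above
discriminator = make_discriminator()

# Optimizer settings exactly as stated in the quoted setup.
opt_g = torch.optim.RMSprop(generator.parameters(), lr=0.0005)
opt_d = torch.optim.RMSprop(discriminator.parameters(), lr=0.0001)
batch_size = 8

# At inference the paper samples z_s, z_a, and a camera pose ξ at random
# before rendering all pixels; here we just sample the latent codes.
z_s, z_a = torch.randn(128), torch.randn(128)
```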