A Shading-Guided Generative Implicit Model for Shape-Accurate 3D-Aware Image Synthesis
Authors: Xingang Pan, Xudong Xu, Chen Change Loy, Christian Theobalt, Bo Dai
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Comprehensive experiments are conducted across multiple datasets to verify the effectiveness of ShadeGAN. The results show that our approach is capable of synthesizing photorealistic images while capturing more accurate underlying 3D shapes than previous generative methods. The learned distribution of 3D shapes enables various downstream tasks like 3D shape reconstruction, where our approach significantly outperforms other baselines on the BFM dataset [13]. |
| Researcher Affiliation | Collaboration | Xingang Pan1 Xudong Xu2 Chen Change Loy3 Christian Theobalt1 Bo Dai3 1Max Planck Institute for Informatics 2The Chinese University of Hong Kong 3S-Lab, Nanyang Technological University |
| Pseudocode | No | The paper describes methods and processes in text and mathematical formulations but does not include any explicit pseudocode blocks or algorithms. |
| Open Source Code | Yes | Our code will be released at https://github.com/XingangPan/ShadeGAN. |
| Open Datasets | Yes | The datasets used include CelebA [43], BFM [13], and Cats [44], all of which contain only unconstrained 2D RGB images. [43] Z. Liu, P. Luo, X. Wang, and X. Tang, Deep learning face attributes in the wild, in ICCV, 2015. [13] P. Paysan, R. Knothe, B. Amberg, S. Romdhani, and T. Vetter, A 3D face model for pose and illumination invariant face recognition, in 2009 Sixth IEEE International Conference on Advanced Video and Signal Based Surveillance, IEEE, 2009. [44] W. Zhang, J. Sun, and X. Tang, Cat head detection: How to effectively exploit shape and texture features, in ECCV, 2008. |
| Dataset Splits | No | The paper mentions training, testing, and evaluation on datasets, but does not specify explicit percentages or sample counts for training, validation, and test splits needed for reproduction. It refers to a BFM test set but does not define the corresponding training or validation splits. |
| Hardware Specification | No | The paper does not explicitly describe the hardware used for running its experiments (e.g., specific GPU models, CPU types, or memory amounts). |
| Software Dependencies | No | The paper mentions using a SIREN-based MLP [45] and a convolutional neural network. However, it does not provide specific version numbers for these or other software dependencies (e.g., PyTorch, TensorFlow, CUDA versions). |
| Experiment Setup | No | The paper mentions adopting a SIREN-based MLP and a convolutional neural network, and training with a non-saturating GAN loss with R1 regularization. It states that "More implementation details are provided in the supplementary material," but these details are not present in the main paper. |
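
For context on the two components the responses above name, the minimal sketch below illustrates a SIREN-style sine-activated MLP layer and the non-saturating GAN loss with R1 regularization. This is not the paper's implementation or released code: the framework (PyTorch), the `omega_0` frequency, and all helper names are assumptions made for illustration only.

```python
# Minimal sketch (assumed PyTorch) of a SIREN-style layer and the
# non-saturating GAN loss with R1 regularization; hyperparameters are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SirenLayer(nn.Module):
    """Linear layer followed by a sine activation, as in SIREN (Sitzmann et al.)."""

    def __init__(self, in_dim: int, out_dim: int, omega_0: float = 30.0):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.omega_0 = omega_0  # frequency scale; value here is an assumption

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sin(self.omega_0 * self.linear(x))


def nonsaturating_d_loss(d_real_logits: torch.Tensor,
                         d_fake_logits: torch.Tensor) -> torch.Tensor:
    """Discriminator loss for the non-saturating GAN objective."""
    return (F.softplus(-d_real_logits) + F.softplus(d_fake_logits)).mean()


def nonsaturating_g_loss(d_fake_logits: torch.Tensor) -> torch.Tensor:
    """Generator loss for the non-saturating GAN objective."""
    return F.softplus(-d_fake_logits).mean()


def r1_penalty(d_real_logits: torch.Tensor,
               real_images: torch.Tensor) -> torch.Tensor:
    """R1 regularization: squared gradient norm of D's output w.r.t. real images.

    real_images must have requires_grad_(True) set before the discriminator
    forward pass so that the gradient can be computed.
    """
    grad, = torch.autograd.grad(
        outputs=d_real_logits.sum(), inputs=real_images, create_graph=True
    )
    return grad.pow(2).reshape(grad.size(0), -1).sum(1).mean()
```

In a typical training loop the R1 penalty is added to the discriminator loss with some weight; the main paper does not state the value it uses, so any choice here would be an assumption.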