Generative Occupancy Fields for 3D Surface-Aware Image Synthesis
Authors: Xudong Xu, Xingang Pan, Dahua Lin, Bo Dai
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 4 Experiments. Comparison with baselines. To validate the effectiveness of GOF, we compare it with two representative generative radiance field methods, namely GRAF [7] and pi-GAN. Firstly, Fig. 3 demonstrates the qualitative comparison between these three methods... Table 1: Quantitative results (128×128 px) on BFM, CelebA and Cats datasets, on three metrics: Fréchet Inception Distance (FID), Inception Score (IS), and the weighted variance of sampled depths Σt_i (×10⁻⁴). A hedged sketch of this depth-variance metric follows the table. |
| Researcher Affiliation | Academia | Xudong Xu, Xingang Pan, Dahua Lin, Bo Dai. CUHK-SenseTime Joint Lab, The Chinese University of Hong Kong; Max Planck Institute for Informatics; S-Lab, Nanyang Technological University. {xx018, dhlin}@ie.cuhk.edu.hk, xpan@mpi-inf.mpg.de, bo.dai@ntu.edu.sg |
| Pseudocode | No | The paper describes its method through text and mathematical equations, but does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is available at https://github.com/SheldonTsui/GOF_NeurIPS2021. |
| Open Datasets | Yes | To assess our method comprehensively, we conduct experiments on three datasets, namely CelebA [44], BFM [45], and Cats [46]. |
| Dataset Splits | No | The paper states that it learns from unposed images and mentions using a 'test split of BFM' for evaluating a separate CNN trained on generated data, but it does not specify explicit train/validation/test splits for the GOF model's training process. |
| Hardware Specification | Yes | To compare the efficiency straightforwardly, we estimate the rendering speed of 256×256 images for both pi-GAN and GOF on a single Intel Xeon(R) CPU. |
| Software Dependencies | No | The paper provides implementation details for its model parameters but does not list any specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | Unless stated otherwise, in all experiments we set N, the number of points sampled for rendering, to 12, and set M, the number of bins used in root-finding, to 12... In practice, the number of iterations is set to m_s = 3 times... In practice, we empirically set τ as 0.5. A hypothetical root-finding sketch based on these hyper-parameters follows the table. |
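
The setup row quotes the hyper-parameters that govern GOF's per-ray surface localisation (M = 12 root-finding bins, m_s = 3 refinement iterations, occupancy threshold τ = 0.5). The snippet below is a minimal, hypothetical sketch of that kind of root finding, not the authors' implementation: `occupancy_fn`, the ray parameterisation, and the coarse-then-secant strategy are assumptions made for illustration.

```python
import numpy as np

def find_surface_depth(occupancy_fn, ray_o, ray_d, t_near, t_far,
                       num_bins=12, secant_iters=3, tau=0.5):
    """Locate the depth where an occupancy field first crosses tau along a ray.

    Minimal sketch using the hyper-parameters quoted in the table
    (M = 12 bins, m_s = 3 secant iterations, tau = 0.5). `occupancy_fn`
    is a hypothetical stand-in for the generator's occupancy prediction
    at a 3D point.
    """
    # Coarse search: evaluate occupancy at M + 1 evenly spaced depths.
    ts = np.linspace(t_near, t_far, num_bins + 1)
    occs = np.array([occupancy_fn(ray_o + t * ray_d) for t in ts])

    # Find the first bin where occupancy rises from below tau to >= tau.
    crossings = np.where((occs[:-1] < tau) & (occs[1:] >= tau))[0]
    if len(crossings) == 0:
        return None  # no surface crossing inside [t_near, t_far]
    i = crossings[0]
    t_lo, t_hi = ts[i], ts[i + 1]
    f_lo, f_hi = occs[i] - tau, occs[i + 1] - tau

    # Secant refinement (m_s iterations) inside the bracketing bin.
    t_mid = 0.5 * (t_lo + t_hi)
    for _ in range(secant_iters):
        t_mid = t_lo - f_lo * (t_hi - t_lo) / (f_hi - f_lo)
        f_mid = occupancy_fn(ray_o + t_mid * ray_d) - tau
        if f_mid < 0.0:
            t_lo, f_lo = t_mid, f_mid
        else:
            t_hi, f_hi = t_mid, f_mid
    return t_mid

# Toy usage: a soft spherical occupancy field of radius 1 centred at the
# origin; the ray starts outside and hits the surface near depth 1.0.
occ = lambda p: 1.0 / (1.0 + np.exp(50.0 * (np.linalg.norm(p) - 1.0)))
depth = find_surface_depth(occ, np.array([0.0, 0.0, -2.0]),
                           np.array([0.0, 0.0, 1.0]), 0.1, 1.9)
```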
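
The comparison row cites Σt_i, the weighted variance of sampled depths, as a proxy for how tightly the rendering weights concentrate around the surface. Below is a hedged NumPy sketch of one plausible way to compute such a statistic; the weight normalisation is an assumption, and the ×10⁻⁴ scaling in the table is a reporting convention rather than part of the formula.

```python
import numpy as np

def weighted_depth_variance(weights, depths):
    """Per-ray weighted variance of sampled depths (assumed form of Σt_i).

    weights: (..., N) compositing weights from volume rendering
    depths:  (..., N) depths t_i of the N samples along each ray
    Returns sum_i w_hat_i * (t_i - t_bar)^2, where w_hat are the normalised
    weights and t_bar is the weight-averaged depth; the paper's exact
    normalisation may differ.
    """
    weights = np.asarray(weights, dtype=np.float64)
    depths = np.asarray(depths, dtype=np.float64)
    w_hat = weights / (weights.sum(axis=-1, keepdims=True) + 1e-8)
    t_bar = (w_hat * depths).sum(axis=-1, keepdims=True)
    return (w_hat * (depths - t_bar) ** 2).sum(axis=-1)
```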