BlockGAN: Learning 3D Object-aware Scene Representations from Unlabelled Images
Authors: Thu H. Nguyen-Phuoc, Christian Richardt, Long Mai, Yongliang Yang, Niloy Mitra
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments show that using explicit 3D features to represent objects allows BlockGAN to learn disentangled representations both in terms of objects (foreground and background) and their properties (pose and identity). |
| Researcher Affiliation | Collaboration | Thu Nguyen-Phuoc (University of Bath), Christian Richardt (University of Bath), Long Mai (Adobe Research), Yong-Liang Yang (University of Bath), Niloy Mitra (Adobe Research & UCL) |
| Pseudocode | No | The paper describes the architecture of BlockGAN and its components, but it does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is available at https://github.com/thunguyenphuoc/BlockGAN. |
| Open Datasets | Yes | These datasets include the synthetic CLEVRn [21], SYNTH-CARn and SYNTH-CHAIRn, and the real REAL-CAR [54], where n is the number of foreground objects. |
| Dataset Splits | No | The paper mentions training on various datasets (e.g., CLEVRn, SYNTH-CARn, REAL-CAR) and evaluates performance using KID scores, but it does not explicitly provide specific details on how these datasets were split into training, validation, or test sets (e.g., percentages or sample counts for each split). |
| Hardware Specification | No | The paper does not provide specific hardware details such as exact GPU/CPU models, processor types, or memory amounts used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., 'Python 3.8, PyTorch 1.9, and CUDA 11.1') in the main text. |
| Experiment Setup | No | The paper mentions general implementation details like image resolution ('64×64 pixels'), assumptions about object types, and the use of a non-saturating GAN loss and a style discriminator loss, but it defers specific numerical hyperparameters (e.g., learning rate, batch size, number of epochs) to the supplemental material. |
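
The "Experiment Setup" row above mentions the non-saturating GAN loss. For reference, here is a minimal PyTorch sketch of that loss; it is an illustration under our own naming, not the paper's released implementation.

```python
import torch
import torch.nn.functional as F

def generator_loss(fake_logits: torch.Tensor) -> torch.Tensor:
    # Non-saturating formulation: the generator maximises log D(G(z))
    # rather than minimising log(1 - D(G(z))), which avoids vanishing
    # gradients when the discriminator is confident early in training.
    return F.binary_cross_entropy_with_logits(
        fake_logits, torch.ones_like(fake_logits))

def discriminator_loss(real_logits: torch.Tensor,
                       fake_logits: torch.Tensor) -> torch.Tensor:
    # Standard binary classification objective: real images -> 1, fakes -> 0.
    real_term = F.binary_cross_entropy_with_logits(
        real_logits, torch.ones_like(real_logits))
    fake_term = F.binary_cross_entropy_with_logits(
        fake_logits, torch.zeros_like(fake_logits))
    return real_term + fake_term
```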
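
Similarly, the "Dataset Splits" row notes that evaluation uses KID scores. Below is a hedged sketch of computing KID with the third-party torchmetrics library (which depends on torch-fidelity); the paper's own evaluation code lives in its repository, and the image tensors here are random placeholders.

```python
import torch
from torchmetrics.image.kid import KernelInceptionDistance

# KID compares Inception features of real vs. generated images with a
# polynomial-kernel MMD, reported as mean ± std over random subsets.
# subset_size must not exceed the number of images on either side.
kid = KernelInceptionDistance(subset_size=50)

# Placeholder uint8 image batches (N, C, H, W) in [0, 255]; in practice
# these would be dataset images and BlockGAN samples at 64×64.
real_images = torch.randint(0, 255, (100, 3, 64, 64), dtype=torch.uint8)
fake_images = torch.randint(0, 255, (100, 3, 64, 64), dtype=torch.uint8)

kid.update(real_images, real=True)
kid.update(fake_images, real=False)
kid_mean, kid_std = kid.compute()
print(f"KID: {kid_mean:.4f} ± {kid_std:.4f}")
```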