Convolutional Generation of Textured 3D Meshes
Authors: Dario Pavllo, Graham Spinks, Thomas Hofmann, Marie-Francine Moens, Aurelien Lucchi
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the efficacy of our method on Pascal3D+ Cars and CUB, both in an unconditional setting and in settings where the model is conditioned on class labels, attributes, and text. |
| Researcher Affiliation | Academia | Dario Pavllo, Dept. of Computer Science, ETH Zurich; Graham Spinks, Dept. of Computer Science, KU Leuven; Thomas Hofmann, Dept. of Computer Science, ETH Zurich; Marie-Francine Moens, Dept. of Computer Science, KU Leuven; Aurelien Lucchi, Dept. of Computer Science, ETH Zurich |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | We release our code and pretrained models at https://github.com/dariopavllo/convmesh. |
| Open Datasets | Yes | We evaluate our method on two datasets with annotated keypoints, and use the implementation of [24] to estimate the pose from keypoints using structure-from-motion. CUB-200-2011 [52]: We use the train/test split of [24]... Pascal3D+ (P3D) [57]: We use the cars subset... |
| Dataset Splits | No | The paper mentions "train/test split" for CUB and "training images" for P3D, but does not explicitly detail a validation set or its split. |
| Hardware Specification | Yes | We train with a batch size of 50 on a single Pascal GPU, which requires 12 hours. |
| Software Dependencies | No | The paper mentions software like Adam, DIB-R, and Mask R-CNN, but does not provide specific version numbers for them. |
| Experiment Setup | Yes | The model (Fig. 1) is trained for 1000 epochs using Adam [28], with an initial learning rate of 10⁻⁴ halved every 250 epochs. We train with a batch size of 50 on a single Pascal GPU, which requires 12 hours. (...) train for 600 epochs with a constant learning rate of 0.0001 for G and 0.0004 for D (two time-scale update rule [19]). We update D twice per G update, and evaluate the model on a running average of G's weights (β = 0.999) as proposed by [64, 25, 26, 4]. (...) For all experiments, we use a total batch size of 32 and we employ synchronized batch normalization across multiple GPUs. |
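
The Experiment Setup row above fully pins down the GAN optimization schedule. Below is a minimal sketch of that schedule, assuming PyTorch (the framework used in the authors' released code). The tiny stand-in networks and the hinge loss are placeholders, not the paper's convolutional architecture or adversarial objective; only the learning rates, the 2:1 D/G update ratio, the batch size of 32, and the EMA coefficient (β = 0.999) come from the quoted setup.

```python
# Sketch of the quoted training schedule: TTUR (D lr = 4x G lr),
# two D updates per G update, and an EMA copy of G for evaluation.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

latent_dim, data_dim = 64, 128  # hypothetical sizes for the stand-in modules
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))
G_ema = copy.deepcopy(G)  # running average of G's weights, used at eval time

# Two time-scale update rule [19]: constant lr of 1e-4 for G, 4e-4 for D.
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=4e-4)

beta = 0.999       # EMA coefficient from the quote
batch_size = 32    # total batch size reported in the paper
for step in range(1000):
    real = torch.randn(batch_size, data_dim)  # placeholder for real samples

    # D is updated twice per G update.
    for _ in range(2):
        opt_d.zero_grad()
        fake = G(torch.randn(batch_size, latent_dim)).detach()
        # Hinge loss is an assumption; the paper's exact objective is not quoted.
        d_loss = F.relu(1 - D(real)).mean() + F.relu(1 + D(fake)).mean()
        d_loss.backward()
        opt_d.step()

    opt_g.zero_grad()
    fake = G(torch.randn(batch_size, latent_dim))
    g_loss = -D(fake).mean()
    g_loss.backward()
    opt_g.step()

    # Exponential moving average of G's weights (beta = 0.999).
    with torch.no_grad():
        for p_ema, p in zip(G_ema.parameters(), G.parameters()):
            p_ema.mul_(beta).add_(p, alpha=1 - beta)
```

The first quoted schedule (initial learning rate 10⁻⁴, halved every 250 epochs over 1000 epochs) maps naturally to wrapping the optimizer in `torch.optim.lr_scheduler.StepLR(optimizer, step_size=250, gamma=0.5)` and calling `.step()` once per epoch.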