PolyGen: An Autoregressive Generative Model of 3D Meshes

Authors: Charlie Nash, Yaroslav Ganin, S. M. Ali Eslami, Peter Battaglia

ICML 2020

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Our primary evaluation metric is log-likelihood, which we find to correlate well with sample quality. We also report summary statistics for generated meshes, and compare our model to existing approaches using chamfer-distance in the image and voxel conditioned settings." (A chamfer-distance sketch follows the table.) |
| Researcher Affiliation | Industry | "DeepMind, London, United Kingdom." |
| Pseudocode | No | The paper describes the model architecture and procedures in text but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper links to the Draco compression library, a third-party tool, but does not state whether the authors' own PolyGen source code is available. |
| Open Datasets | Yes | "We train all our models on the ShapeNet Core V2 dataset (Chang et al., 2015)" |
| Dataset Splits | Yes | "We train all our models on the ShapeNet Core V2 dataset (Chang et al., 2015), which we subdivide into 92.5% training, 2.5% validation and 5% testing splits." (A split sketch follows the table.) |
| Hardware Specification | Yes | "We train the vertex and face models for 1e6 and 5e5 weight updates respectively, using four V100 GPUs per training run for a total batch size of 16." |
| Software Dependencies | No | The paper mentions Blender for data preparation and Draco for compression comparisons, but gives no version numbers for these or for other dependencies needed for replication (e.g., deep learning framework or Python versions). |
| Experiment Setup | Yes | "We use the Adam optimizer with a gradient clipping norm of 1.0, and perform cosine annealing from a maximum learning rate of 3e-4, with a linear warm up period of 5000 steps. We use a dropout rate of 0.2 for all models. Unless otherwise specified we use embeddings of size 256, fully connected layers of size 1024, and 18 and 12 Transformer blocks for the vertex and face models respectively." (A schedule sketch follows the table.) |
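
The chamfer-distance comparison cited under Research Type is easy to reproduce as a metric. Below is a minimal NumPy sketch of a symmetric chamfer distance between two point sets; the exact normalization used in the paper (mean vs. sum, squared vs. unsquared distances) is an assumption here, not something the quoted text specifies.

```python
import numpy as np

def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric chamfer distance between point sets a (N, 3) and b (M, 3).

    For each point, take the squared distance to its nearest neighbour in
    the other set, then average in both directions. The normalization is
    an assumption; the paper does not state its exact convention.
    """
    # Pairwise squared Euclidean distances, shape (N, M).
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

# Example: two random point clouds standing in for mesh samples.
a = np.random.rand(1024, 3)
b = np.random.rand(1024, 3)
print(chamfer_distance(a, b))
```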
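
The dataset splits are given only as fractions, so a sketch like the following would reproduce them. The shuffle and the fixed seed are assumptions; the paper does not say how individual meshes were assigned to splits.

```python
import random

def split_dataset(mesh_ids, seed=0):
    """Split mesh ids into 92.5% train / 2.5% validation / 5% test.

    Only the fractions come from the paper; the shuffle and seed are
    assumptions made for reproducibility of this sketch.
    """
    ids = list(mesh_ids)
    random.Random(seed).shuffle(ids)
    n = len(ids)
    n_train = int(0.925 * n)
    n_val = int(0.025 * n)
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]

train, val, test = split_dataset(range(1000))
print(len(train), len(val), len(test))  # 925 25 50
```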
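
The Experiment Setup row pins the optimizer schedule down well enough to sketch. The function below implements linear warm-up to the reported maximum learning rate followed by cosine annealing; decaying to exactly zero over the full run is an assumption, as the paper does not state the final learning rate.

```python
import math

MAX_LR = 3e-4         # maximum learning rate (from the paper)
WARMUP_STEPS = 5_000  # linear warm-up period (from the paper)

def learning_rate(step: int, total_steps: int) -> float:
    """Linear warm-up to MAX_LR, then cosine annealing.

    Annealing to zero at `total_steps` is an assumption; the paper
    specifies only the maximum rate and warm-up length.
    """
    if step < WARMUP_STEPS:
        return MAX_LR * step / WARMUP_STEPS
    progress = (step - WARMUP_STEPS) / max(1, total_steps - WARMUP_STEPS)
    return 0.5 * MAX_LR * (1.0 + math.cos(math.pi * progress))

# The paper trains the vertex model for 1e6 updates and the face model
# for 5e5, with a total batch size of 16 on four V100s (4 meshes per GPU).
for step in (0, 2_500, 5_000, 500_000, 1_000_000):
    print(step, learning_rate(step, total_steps=1_000_000))
```

Combined with Adam, a gradient clipping norm of 1.0, and a dropout rate of 0.2, this covers every optimization hyperparameter the paper reports.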