MeshDiffusion: Score-based Generative 3D Mesh Modeling
Authors: Zhen Liu, Yao Feng, Michael J. Black, Derek Nowrouzezahrai, Liam Paull, Weiyang Liu
ICLR 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the effectiveness of our model on multiple generative tasks. We validate the superiority of the visual quality of our generated samples qualitatively with different rendered views and quantitatively by proxy metrics. We further conduct ablation studies to show that our design choices are necessary and well suited for the task of 3D mesh generation. |
| Researcher Affiliation | Academia | Zhen Liu (1,2), Yao Feng (2,3), Michael J. Black (2), Derek Nowrouzezahrai (4), Liam Paull (1), Weiyang Liu (2,5); 1: Mila, Université de Montréal; 2: Max Planck Institute for Intelligent Systems, Tübingen; 3: ETH Zürich; 4: McGill University; 5: University of Cambridge |
| Pseudocode | Yes | Algorithm 1 Training and Inference; Algorithm 2 Conditional Generation |
| Open Source Code | Yes | Project Page: meshdiffusion.github.io |
| Open Datasets | Yes | In our experiments, ShapeNet datasets [7] |
| Dataset Splits | Yes | We use the same train/test split in [55]. |
| Hardware Specification | Yes | Fitting each object takes roughly 20-30 minutes on a single Quadro RTX 6000 GPU. ... train the discrete-time category-specific diffusion models for all datasets for a total of 90k iterations with batch size 48 on 8 A100-80GB GPUs. |
| Software Dependencies | No | The paper mentions using a 3D U-Net and DDPM but does not specify version numbers for any software libraries, frameworks (e.g., PyTorch, TensorFlow), or programming languages. |
| Experiment Setup | Yes | We train the discrete-time category-specific diffusion models for all datasets for a total of 90k iterations with batch size 48 on 8 A100-80GB GPUs. The training process typically takes 2 to 3 days. ... We set α_image = α_Chamfer = 1.0 and α_depth = 100.0. We set α_SDF to 0.2 and use a linear decay of the scale towards 0.01. We use an Adam optimizer for all the parameters with a learning rate of 5e-4 and (β_1, β_2) = (0.9, 0.999). We train both reconstruction passes with 5000 iterations. (A hedged sketch of this fitting configuration appears below the table.) |
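The Experiment Setup row pins down concrete fitting hyperparameters. The snippet below is a minimal sketch of how those reported values (loss weights, Adam with learning rate 5e-4 and betas (0.9, 0.999), 5000 iterations, and a linear decay of the SDF weight from 0.2 towards 0.01) could be wired into an optimization loop. PyTorch is an assumption, since the paper names no framework, and `compute_losses` along with the parameter tensor are hypothetical placeholders standing in for the paper's differentiable rendering and loss terms.

```python
import torch

# Minimal sketch of the per-object fitting optimization described in the
# Experiment Setup row. PyTorch is an assumption (the paper does not name
# its framework); `compute_losses` and the parameter tensor are hypothetical
# placeholders for the paper's rendering pipeline and loss terms.
ALPHA_IMAGE, ALPHA_CHAMFER, ALPHA_DEPTH = 1.0, 1.0, 100.0   # reported weights
ALPHA_SDF_START, ALPHA_SDF_END = 0.2, 0.01                  # linearly decayed
NUM_ITERS = 5000                                            # per fitting pass

fit_params = torch.randn(64, 64, 64, 4, requires_grad=True)  # placeholder grid

def compute_losses(params: torch.Tensor) -> dict:
    # Placeholder loss terms; a real implementation would render the fitted
    # shape and compare against images, depth maps, and reference geometry.
    return {
        "image": params.pow(2).mean(),
        "chamfer": params.abs().mean(),
        "depth": (params - 1).pow(2).mean(),
        "sdf": params.mean().abs(),
    }

optimizer = torch.optim.Adam([fit_params], lr=5e-4, betas=(0.9, 0.999))

for it in range(NUM_ITERS):
    # Linear decay of the SDF regularization weight from 0.2 towards 0.01.
    alpha_sdf = ALPHA_SDF_START + (ALPHA_SDF_END - ALPHA_SDF_START) * it / (NUM_ITERS - 1)
    losses = compute_losses(fit_params)
    loss = (ALPHA_IMAGE * losses["image"]
            + ALPHA_CHAMFER * losses["chamfer"]
            + ALPHA_DEPTH * losses["depth"]
            + alpha_sdf * losses["sdf"])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The loss names and parameterization above are only illustrative; the reported 90k-iteration, batch-size-48 diffusion training on 8 A100-80GB GPUs is a separate stage not shown here.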