LION: Latent Point Diffusion Models for 3D Shape Generation
Authors: Xiaohui Zeng, Arash Vahdat, Francis Williams, Zan Gojcic, Or Litany, Sanja Fidler, Karsten Kreis
NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimentally, LION achieves state-of-the-art generation performance on multiple ShapeNet benchmarks. |
| Researcher Affiliation | Collaboration | 1NVIDIA 2University of Toronto 3Vector Institute |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | We will release code and instructions to reproduce all experiments upon acceptance of the manuscript. The internal guidelines of our institution prevent us from releasing code at this stage. |
| Open Datasets | Yes | To compare LION against existing methods, we use ShapeNet [104], the most widely used dataset to benchmark 3D shape generative models. |
| Dataset Splits | Yes | Following previous works [31, 46], we train on three categories: airplane, chair, car. Also like previous methods, we primarily rely on PointFlow's [31] dataset splits and preprocessing. (See the data-loading sketch after this table.) |
| Hardware Specification | Yes | All experiments are performed on NVIDIA DGX servers with NVIDIA A100 GPUs. |
| Software Dependencies | No | The paper mentions several software tools and libraries used (e.g., PyTorch, the Mitsuba renderer), but does not provide specific version numbers for these dependencies. |
| Experiment Setup | Yes | Our LION models use a batch size of 256 for all experiments. The encoder and decoder were trained with a learning rate of 1e-4 for 100 epochs, while the latent DDMs were trained with a learning rate of 2e-4 for 500 epochs. We use Adam optimizer... (See the training-schedule sketch after this table.) |
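
The dataset-splits row reports that LION trains on the airplane, chair, and car categories of ShapeNet using PointFlow's splits and preprocessing. Below is a minimal PyTorch sketch of such a per-category loader; the directory layout, the `.npy` file format, and the 2048-point sampling are assumptions for illustration, not LION's released data pipeline.

```python
import os
import numpy as np
import torch
from torch.utils.data import Dataset

# Categories used for training in the paper.
CATEGORIES = ["airplane", "chair", "car"]

class ShapeNetPoints(Dataset):
    """Hypothetical loader: one .npy point cloud per shape, organized as
    root/<category>/<split>/*.npy (layout assumed, not from the paper)."""

    def __init__(self, root, category, split="train", num_points=2048):
        split_dir = os.path.join(root, category, split)
        self.paths = sorted(
            os.path.join(split_dir, f)
            for f in os.listdir(split_dir)
            if f.endswith(".npy")
        )
        self.num_points = num_points

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        points = np.load(self.paths[idx])  # assumed shape (N, 3)
        choice = np.random.choice(len(points), self.num_points, replace=False)
        return torch.from_numpy(points[choice]).float()
```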
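
The experiment-setup row quotes a two-stage schedule: batch size 256, the VAE encoder/decoder trained with Adam at a learning rate of 1e-4 for 100 epochs, and the latent DDMs trained with Adam at 2e-4 for 500 epochs. The following PyTorch sketch mirrors that optimizer configuration; `vae` and `latent_ddm` are placeholder modules, not LION's actual architectures.

```python
import torch
from torch import nn

# Placeholder networks standing in for LION's point-cloud VAE and its
# latent DDMs; the real architectures are described in the paper.
vae = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, 3))
latent_ddm = nn.Sequential(nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 128))

BATCH_SIZE = 256  # reported batch size for all experiments

# Stage 1: train the encoder/decoder (VAE) -- Adam, lr 1e-4, 100 epochs.
vae_optimizer = torch.optim.Adam(vae.parameters(), lr=1e-4)
VAE_EPOCHS = 100

# Stage 2: train the latent DDMs in the latent space of the trained VAE --
# Adam, lr 2e-4, 500 epochs.
ddm_optimizer = torch.optim.Adam(latent_ddm.parameters(), lr=2e-4)
DDM_EPOCHS = 500
```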