DiffComplete: Diffusion-based Generative 3D Shape Completion

Authors: Ruihang Chu, Enze Xie, Shentong Mo, Zhenguo Li, Matthias Nießner, Chi-Wing Fu, Jiaya Jia

NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Section 4 Experiment (4.1 Experimental Setup; 4.2 Main Results); Table 1: Quantitative shape completion results on objects of known categories. We evaluate on two large-scale shape completion benchmarks: 3D-EPN [14] and PatchComplete [15].
Researcher Affiliation | Collaboration | Ruihang Chu (1), Enze Xie (2), Shentong Mo (3), Zhenguo Li (2), Matthias Nießner (4), Chi-Wing Fu (1), Jiaya Jia (1,5); affiliations: 1 The Chinese University of Hong Kong, 2 Huawei Noah's Ark Lab, 3 MBZUAI, 4 Technical University of Munich, 5 SmartMore
Pseudocode | No | The paper describes the proposed method in detail and illustrates its architecture in Figure 2, but it does not include a formal pseudocode or algorithm block. (A generic diffusion training-step sketch is given after the table.)
Open Source Code | No | The abstract provides a project website link 'https://ruihangchu.com/diffcomplete.html', but the paper text does not contain an explicit statement about releasing the source code or a direct link to a code repository for the described method.
Open Datasets | Yes | We evaluate on two large-scale shape completion benchmarks: 3D-EPN [14] and PatchComplete [15]. ... It includes both the synthetic data from ShapeNet [73] and the challenging real data from ScanNet [75].
Dataset Splits | Yes | For a fair comparison, we follow their data splits and evaluation metrics, i.e., mean l1 error on the TUDF predictions across all voxels on 3D-EPN, and l1 Chamfer Distance (CD) and Intersection over Union (IoU) between the predicted and ground-truth shapes on PatchComplete. (A sketch of these two metrics is given after the table.)
Hardware Specification | Yes | We first train our network using a single partial scan as input by 200k iterations on four RTX3090 GPUs, taking around two days.
Software Dependencies | No | The paper mentions the 'Adam optimizer [76]' and the use of GPUs, which implies a deep-learning framework such as PyTorch or TensorFlow, but it does not provide version numbers for any libraries, frameworks, or operating systems used in the experiments.
Experiment Setup | Yes | Implementation details. We first train our network using a single partial scan as input by 200k iterations on four RTX3090 GPUs, taking around two days. If multiple conditions are needed, we finetune the projection layers ψ for an additional 50k iterations. Adam optimizer [76] is employed with a learning rate of 1e-4 and the batch size is 32. (A configuration sketch is given after the table.)
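
Since the paper provides no pseudocode (see the Pseudocode row), the following is a minimal, generic sketch of a conditional denoising-diffusion training step on voxel grids, the family of methods DiffComplete belongs to. It is not the authors' algorithm: the denoiser interface, the linear noise schedule, and all tensor shapes are assumptions made only for illustration.

```python
# Generic sketch of a conditional denoising-diffusion training step on voxelized
# shapes. NOT the authors' implementation; the 3D denoiser, the noise schedule,
# and all tensor shapes below are assumptions for illustration only.
import torch
import torch.nn.functional as F

T = 1000                                         # assumed number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)            # assumed linear noise schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

def diffusion_training_step(denoiser, x0, condition):
    """One training step: predict the noise added to a complete shape x0,
    given a partial-scan condition (both as [B, 1, D, H, W] voxel grids)."""
    b = x0.shape[0]
    t = torch.randint(0, T, (b,), device=x0.device)           # random timestep per sample
    noise = torch.randn_like(x0)
    a_bar = alphas_cumprod.to(x0.device)[t].view(b, 1, 1, 1, 1)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise    # forward (noising) process
    pred = denoiser(x_t, t, condition)                         # condition-aware denoiser
    return F.mse_loss(pred, noise)                             # epsilon-prediction loss
```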
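
The Dataset Splits row quotes l1 Chamfer Distance and IoU as the PatchComplete evaluation metrics. Below is a minimal sketch of both, assuming point sets of shape [N, 3] for CD and voxel grids binarized at an assumed occupancy threshold for IoU; the benchmarks' official evaluation scripts should be treated as the reference.

```python
# Minimal sketch of the two PatchComplete-style metrics quoted above: l1 Chamfer
# Distance between point sets and IoU between voxel grids. Thresholds and shapes
# are assumptions, not the benchmarks' exact protocol.
import torch

def l1_chamfer_distance(pred_pts, gt_pts):
    """Symmetric l1 Chamfer Distance between point sets of shape [N, 3] and [M, 3]."""
    diff = (pred_pts[:, None, :] - gt_pts[None, :, :]).abs().sum(-1)  # [N, M] l1 distances
    return diff.min(dim=1).values.mean() + diff.min(dim=0).values.mean()

def voxel_iou(pred_grid, gt_grid, threshold=0.5):
    """IoU between two voxel grids after binarizing at an assumed occupancy threshold."""
    pred_occ = pred_grid > threshold
    gt_occ = gt_grid > threshold
    intersection = (pred_occ & gt_occ).sum().float()
    union = (pred_occ | gt_occ).sum().float()
    return (intersection / union).item()
```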
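
The Experiment Setup row specifies Adam with a learning rate of 1e-4, batch size 32, and 200k training iterations, plus 50k finetuning iterations of the projection layers ψ for the multi-condition case. The sketch below wires those numbers into a plain PyTorch training loop, reusing the hypothetical diffusion_training_step from the earlier sketch; the dataset interface and the name-based parameter freezing are assumptions, not the released code.

```python
# Sketch of the optimization settings quoted in the Experiment Setup row:
# Adam, lr 1e-4, batch size 32, 200k iterations. Module and dataset names are
# assumptions; diffusion_training_step is the hypothetical function defined above.
import torch
from torch.utils.data import DataLoader

def train(model, dataset, num_iters=200_000, batch_size=32, lr=1e-4, device="cuda"):
    model = model.to(device)
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True, drop_last=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    it = 0
    while it < num_iters:
        for x0, condition in loader:                 # complete shape + partial-scan condition
            loss = diffusion_training_step(model, x0.to(device), condition.to(device))
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            it += 1
            if it >= num_iters:
                break

# Optional multi-condition finetuning (assumed naming): freeze everything except the
# projection layers and run 50k more iterations at the same learning rate, e.g.
# for name, p in model.named_parameters():
#     p.requires_grad = "projection" in name
```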