DiffGS: Functional Gaussian Splatting Diffusion

Authors: Junsheng Zhou, Weiqi Zhang, Yu-Shen Liu

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "For unconditional generation of 3D Gaussian Splatting, we conduct experiments under the airplane and chair classes of ShapeNet [6] dataset. Following previous works [42, 4], we report two widely-used image generation metrics Fréchet Inception Distance (FID) [20] and Kernel Inception Distance (KID) [3] for evaluating the rendering quality of our proposed DiffGS and previous state-of-the-art works." (A sketch of the FID computation appears after the table.)
Researcher Affiliation | Academia | Junsheng Zhou, Weiqi Zhang, Yu-Shen Liu; School of Software, Tsinghua University, Beijing, China; {zhou-js24,zwq23}@mails.tsinghua.edu.cn, liuyushen@tsinghua.edu.cn
Pseudocode | No | The paper describes algorithms such as the Gaussian Extraction Algorithm in Section 3.4, but it does not present them in a structured pseudocode block or a clearly labeled algorithm figure.
Open Source Code | Yes | "We provide our demonstration code as a part of our supplementary materials. We will release the source code, data and instructions upon acceptance."
Open Datasets | Yes | "For unconditional generation of 3D Gaussian Splatting, we conduct experiments under the airplane and chair classes of ShapeNet [6] dataset."
Dataset Splits | No | The paper mentions splitting the dataset into train/test sets for ShapeNet ('we split the airplane and chair classes of the ShapeNet dataset into train/test sets') but does not explicitly provide details about a validation split, its percentages, or counts.
Hardware Specification | Yes | "Inference time is measured on a single NVIDIA RTX 3090 GPU." (A GPU timing sketch appears after the table.)
Software Dependencies | No | The paper mentions 'PyTorch Lightning' for implementation and the 'Adam optimizer' but does not provide specific version numbers for these or other software dependencies.
Experiment Setup | Yes | "We leverage the Adam optimizer with a learning rate of 0.0001." (An optimizer configuration sketch appears after the table.)
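
The Research Type row quotes the paper's evaluation metrics, FID [20] and KID [3], computed on renderings of the generated Gaussians. As a rough illustration only, not the authors' evaluation code, the sketch below computes FID from two sets of Inception features assumed to have been extracted beforehand; KID is the analogous estimate based on a polynomial-kernel MMD and is not shown.

```python
import numpy as np
from scipy import linalg

def frechet_inception_distance(feats_real: np.ndarray, feats_gen: np.ndarray) -> float:
    """FID between real and generated renderings, given Inception features of shape (N, D).

    FID = ||mu_r - mu_g||^2 + Tr(C_r + C_g - 2 (C_r C_g)^{1/2})
    """
    mu_r, mu_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_g = np.cov(feats_gen, rowvar=False)
    covmean, _ = linalg.sqrtm(cov_r @ cov_g, disp=False)  # matrix square root of the covariance product
    covmean = covmean.real                                 # discard tiny imaginary numerical noise
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))
```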
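
The Hardware Specification row states only that inference time was measured on a single RTX 3090; the measurement protocol is not described. A minimal sketch of how GPU inference time is commonly measured in PyTorch (an assumption on our part, not the paper's procedure) follows; `model` and `batch` are placeholders.

```python
import time
import torch

@torch.no_grad()
def average_inference_time(model: torch.nn.Module, batch, warmup: int = 5, iters: int = 20) -> float:
    """Return the mean per-call inference time in seconds on the current CUDA device."""
    model.eval()
    for _ in range(warmup):
        model(batch)                 # warm-up passes so kernel compilation/caching is not timed
    torch.cuda.synchronize()         # drain any queued GPU work before starting the clock
    start = time.perf_counter()
    for _ in range(iters):
        model(batch)
    torch.cuda.synchronize()         # wait for all timed GPU work to finish
    return (time.perf_counter() - start) / iters
```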
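
The Software Dependencies and Experiment Setup rows together report PyTorch Lightning and Adam with a learning rate of 0.0001, which is the only optimization detail given. In a Lightning module that choice would typically be wired up as below; the network and the loss are placeholders introduced for illustration, not parts of DiffGS.

```python
import torch
import pytorch_lightning as pl

class DiffGSModule(pl.LightningModule):
    """Placeholder module: only the optimizer setting reflects what the paper reports."""

    def __init__(self):
        super().__init__()
        self.net = torch.nn.Linear(128, 128)  # stand-in for the actual DiffGS networks

    def training_step(self, batch, batch_idx):
        x, target = batch
        return torch.nn.functional.mse_loss(self.net(x), target)  # dummy loss for illustration

    def configure_optimizers(self):
        # Adam with lr = 0.0001 as stated in the paper; all other hyperparameters are library defaults.
        return torch.optim.Adam(self.parameters(), lr=1e-4)
```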