R$^2$-Gaussian: Rectifying Radiative Gaussian Splatting for Tomographic Reconstruction
Authors: Ruyi Zha, Tao Jun Lin, Yuanhao Cai, Jiwen Cao, Yanhao Zhang, Hongdong Li
NeurIPS 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on synthetic and real-world datasets demonstrate that our method outperforms state-of-the-art approaches in accuracy and efficiency. |
| Researcher Affiliation | Academia | (1) The Australian National University, (2) Johns Hopkins University, (3) Robotics Institute, University of Technology Sydney |
| Pseudocode | No | The paper describes algorithms in text but does not provide structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code and models are available on the project page https://github.com/Ruyi-Zha/r2_gaussian. |
| Open Datasets | Yes | For the synthetic dataset, we collect 15 real CT volumes... For real-world experiments, we use three cases from the FIPS dataset [56]... Refer to Appendix B for more details of datasets. (Appendix B: LIDC-IDRI [4] and Pancreas-CT [47], X-Plant [58], SciVis [26], FIPS [56]) |
| Dataset Splits | No | The paper discusses training and testing but does not explicitly provide training/test/validation dataset splits or refer to a validation set. |
| Hardware Specification | Yes | All methods run on a single RTX3090 GPU. |
| Software Dependencies | No | Our R2-Gaussian is implemented in PyTorch [44] and CUDA [50]... (no version numbers provided for reproducibility) |
| Experiment Setup | Yes | Learning rates for position, density, scale, and rotation are initially set as 0.0002, 0.01, 0.005, and 0.001, respectively, and decayed exponentially to 0.1 of their initial values. Loss weights are $\lambda_{\text{ssim}} = 0.25$ and $\lambda_{\text{tv}} = 0.05$. We initialize M = 50k Gaussians with a density threshold τ = 0.05 and scaling term k = 0.15. The TV volume size is D = 32. Adaptive control runs from 500 to 15k iterations with a gradient threshold of 0.00005. |
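The experiment-setup row above lists enough hyperparameters to reconstruct a training configuration. The sketch below gathers them into a Python dataclass for reference; the field names and the exponential decay helper are illustrative assumptions, not the authors' actual interface (their configuration lives in the linked GitHub repository).

```python
from dataclasses import dataclass


@dataclass
class R2GaussianConfig:
    """Hyperparameters as quoted in the paper (field names are hypothetical)."""
    # Initial learning rates, each decayed exponentially to 0.1x its initial value
    lr_position: float = 2e-4
    lr_density: float = 1e-2
    lr_scale: float = 5e-3
    lr_rotation: float = 1e-3
    lr_final_factor: float = 0.1
    # Loss weights
    lambda_ssim: float = 0.25
    lambda_tv: float = 0.05
    # Gaussian initialization
    num_init_gaussians: int = 50_000     # M = 50k
    density_threshold: float = 0.05      # tau
    scaling_term: float = 0.15           # k
    # Total-variation regularization volume size
    tv_volume_size: int = 32             # D
    # Adaptive density control window and gradient threshold
    densify_start_iter: int = 500
    densify_end_iter: int = 15_000
    densify_grad_threshold: float = 5e-5


def lr_at_iter(lr_init: float, final_factor: float, it: int, max_iter: int) -> float:
    """Exponential decay from lr_init toward final_factor * lr_init over max_iter iterations."""
    return lr_init * final_factor ** (it / max_iter)


if __name__ == "__main__":
    cfg = R2GaussianConfig()
    # Example: position learning rate halfway through a hypothetical 30k-iteration run
    print(lr_at_iter(cfg.lr_position, cfg.lr_final_factor, 15_000, 30_000))
```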