DDGS-CT: Direction-Disentangled Gaussian Splatting for Realistic Volume Rendering

Authors: Zhongpai Gao, Benjamin Planche, Meng Zheng, Xiao Chen, Terrence Chen, Ziyan Wu

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our method outperforms state-of-the-art techniques in image accuracy. Furthermore, our DDGS shows promise for intraoperative applications and inverse problems such as pose registration, delivering superior registration accuracy and runtime performance compared to analytical DRR methods." (Section 4: Experiments)
Researcher Affiliation | Industry | Zhongpai Gao, Benjamin Planche, Meng Zheng, Xiao Chen, Terrence Chen, Ziyan Wu — United Imaging Intelligence, Boston, MA ({first.last}@uii-ai.com)
Pseudocode | No | The paper describes its methods but does not include any explicit pseudocode or algorithm blocks.
Open Source Code | No | The paper's NeurIPS checklist asks "Does the paper provide open access to the data and code...?" and answers: [No], with the justification "All data used are public, and implementation release is pending approval at the moment."
Open Datasets | Yes | "Datasets. We consider 4 datasets. (1) NAF-CT [42] includes four CT images of abdomen, chest, foot, and jaw. (2) CTPelvic1K [19] is a large dataset of pelvic CT images."
Dataset Splits | No | "For each, we adopt TIGRE [5] to sample 50 evenly-distributed projections for training and 50 randomly-distributed ones for testing... We adopt DeepDRR [35] to generate 60 training (evenly-distributed) and 60 (randomly-distributed) testing DRRs... we randomly sample 900 projections for training and 100 for testing." The paper clearly defines training and testing sets but does not specify a separate validation set or its split.
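The split protocol quoted above (evenly spaced projection angles for training, uniformly random angles for testing) can be sketched as follows. This is an illustrative reconstruction, not the paper's released code; the function name and the full-rotation angular range are assumptions.

```python
import numpy as np

def split_projection_angles(n_train=50, n_test=50, seed=0):
    """Sketch of the reported split: evenly distributed gantry angles
    for training, randomly distributed angles for testing.
    Assumes a full 360-degree rotation (not stated explicitly)."""
    rng = np.random.default_rng(seed)
    # Training views: evenly spaced over [0, 2*pi).
    train_angles = np.linspace(0.0, 2 * np.pi, n_train, endpoint=False)
    # Testing views: drawn uniformly at random over the same range.
    test_angles = rng.uniform(0.0, 2 * np.pi, size=n_test)
    return train_angles, test_angles
```

The same pattern covers the other reported splits (60/60 DRRs, 900/100 projections) by changing `n_train` and `n_test`.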
Hardware Specification | Yes | "DDGS training is performed on a single NVIDIA A100 using an Adam optimizer [18] with a learning rate of 1.25×10⁻⁴ for b_iso and B_dir and 2.5×10⁻³ for f_iso and f_dir."
Software Dependencies | No | The paper mentions using an Adam optimizer [18] but does not specify software dependencies with version numbers for the libraries or frameworks used in the implementation.
Experiment Setup | Yes | "For evaluation, we set the decomposition degree to L=1, the feature dimension to k=8, and initial cloud sizes to n1=15,000 and n2=10,000. We apply a loss weight λ = 0.2. We choose the default threshold level (i.e., the average of the minimum and maximum volume) for the marching-cubes algorithm. DDGS training is performed on a single NVIDIA A100 using an Adam optimizer [18] with a learning rate of 1.25×10⁻⁴ for b_iso and B_dir and 2.5×10⁻³ for f_iso and f_dir. Default 3DGS [17] learning rates are applied to the remaining parameters."
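The two-rate Adam setup quoted above maps naturally onto PyTorch parameter groups. Below is a minimal sketch under that assumption; the tensor shapes are illustrative stand-ins for the paper's b_iso, B_dir, f_iso, and f_dir parameters (here sized by the reported initial cloud sizes n1=15,000 and n2=10,000 and feature dimension k=8), not the actual model definition.

```python
import torch

# Hypothetical stand-ins for the paper's parameter tensors.
b_iso = torch.zeros(15000, 1, requires_grad=True)   # isotropic density params
B_dir = torch.zeros(10000, 1, requires_grad=True)   # direction-dependent params
f_iso = torch.zeros(15000, 8, requires_grad=True)   # isotropic features (k=8)
f_dir = torch.zeros(10000, 8, requires_grad=True)   # directional features (k=8)

# One Adam optimizer with per-group learning rates, as reported.
optimizer = torch.optim.Adam([
    {"params": [b_iso, B_dir], "lr": 1.25e-4},
    {"params": [f_iso, f_dir], "lr": 2.5e-3},
])
```

Remaining parameters would use the default 3DGS learning rates, which could be appended as further parameter groups in the same optimizer.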