ContextGS: Compact 3D Gaussian Splatting with Anchor Level Context Model

Authors: Yufei Wang, Zhihao Li, Lanqing Guo, Wenhan Yang, Alex Kot, Bihan Wen

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate the performance of the models on several real-scene datasets, including BungeeNeRF [32], Deep Blending [14], Mip-NeRF360 [3], and Tanks&Temples [16]. To more comprehensively evaluate the performance of our method, following the previous work [5], we use all 9 scenes in Mip-NeRF360 [3]. The detailed results of each scene are reported in Appendix A.3. To further evaluate the performance of models across a wide range of compression ratios, we use Rate-Distortion (RD) curves as an additional metric. [...] 5.3 Ablation studies and discussions (An illustrative RD-curve sketch follows this table.)
Researcher Affiliation | Academia | Yufei Wang (1), Zhihao Li (1), Lanqing Guo (1), Wenhan Yang (2), Alex C. Kot (1), Bihan Wen (1); (1) Nanyang Technological University, Singapore; (2) Peng Cheng Laboratory, China. Emails: {yufei001, zhihao.li, lanqing.guo, eackot, bihan.wen}@ntu.edu.sg; yangwh@pcl.ac.cn
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | Homepage: https://github.com/wyf0912/ContextGS [...] We will release the code and pretrained models on acceptance.
Open Datasets | Yes | We evaluate the performance of the models on several real-scene datasets, including BungeeNeRF [32], Deep Blending [14], Mip-NeRF360 [3], and Tanks&Temples [16].
Dataset Splits | No | The paper mentions 'training iterations' and 'training loss' but does not provide specific details on how the dataset was split into training, validation, and test sets with percentages or counts.
Hardware Specification | Yes | All our experiments are done using a server with RTX 3090s.
Software Dependencies | No | The paper mentions 'efficient CUDA implementation' but does not specify other software dependencies or their version numbers.
Experiment Setup | Yes | We build our method based on Scaffold-GS [20]. The number of levels is set to 3 for all experiments and the target ratio between two adjacent levels is 0.2. hc is set to 4, i.e., the dimension of the hyper-prior feature is a fourth of the anchor feature dimension. For a fair comparison, the dimension of the anchor feature f is set to 50 following [5] and we set the same λm = 5e-4. The setting of λe is discussed in Appendix A.3 since different values are used to evaluate different rate-distortion tradeoffs. For a fair comparison, we use the same number of training iterations as Scaffold-GS [20] and HAC [5], i.e., 30000 iterations. Besides, we use the same hyperparameters for anchor growing as Scaffold-GS [20] so that the final model has a similar or even smaller number of anchors, leading to faster rendering speed. (A hedged configuration sketch of these settings follows this table.)
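
To make the quoted setup easier to scan, here is a minimal configuration sketch in Python. The field names are hypothetical and are not claimed to match the flags in the wyf0912/ContextGS repository; the values are those stated in the row above, and lambda_e is left as a placeholder because the paper varies it to sweep rate-distortion tradeoffs (Appendix A.3).

    # Hedged sketch only: field names below are hypothetical and do not
    # necessarily match the released ContextGS code; values are the ones
    # quoted from the paper's experiment setup.
    from dataclasses import dataclass

    @dataclass
    class ContextGSConfig:
        num_levels: int = 3               # anchor levels used in all experiments
        level_target_ratio: float = 0.2   # target anchor ratio between adjacent levels
        anchor_feature_dim: int = 50      # dimension of anchor feature f, following HAC [5]
        h_c: int = 4                      # hyper-prior dim = anchor_feature_dim // h_c
        lambda_m: float = 5e-4            # masking loss weight, same as HAC [5]
        lambda_e: float = 4e-3            # placeholder: varied per RD tradeoff (Appendix A.3)
        training_iterations: int = 30000  # same schedule as Scaffold-GS [20] and HAC [5]

        @property
        def hyper_prior_dim(self) -> int:
            # "the dimension of the hyper-prior feature is a fourth of
            # the anchor feature dimension"
            return self.anchor_feature_dim // self.h_c

    cfg = ContextGSConfig()
    assert cfg.hyper_prior_dim == 12      # 50 // 4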
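
The Research Type row also cites rate-distortion (RD) curves as the metric for comparing methods across compression ratios. The sketch below shows the usual way such a curve is assembled: train one model per λe setting, record (model size, PSNR) for each, and plot size against quality. Every number in it is an illustrative placeholder, not a result from the paper.

    # Illustrative RD-curve plot. The (size_mb, psnr_db) pairs are
    # placeholders, NOT results from the paper; in practice each point
    # comes from a model trained with a different rate weight lambda_e.
    import matplotlib.pyplot as plt

    operating_points = {
        "method A (placeholder)": [(5.0, 26.4), (8.0, 27.1), (12.0, 27.5), (20.0, 27.8)],
        "method B (placeholder)": [(6.0, 26.0), (10.0, 26.8), (16.0, 27.2), (25.0, 27.5)],
    }

    for label, points in operating_points.items():
        sizes, psnrs = zip(*sorted(points))  # sort by rate so the curve is monotone in x
        plt.plot(sizes, psnrs, marker="o", label=label)

    plt.xlabel("Model size (MB)")  # rate
    plt.ylabel("PSNR (dB)")        # distortion (quality)
    plt.legend()
    plt.grid(True)
    plt.savefig("rd_curve.png")

A curve toward the upper left (higher PSNR at a smaller size) indicates a better rate-distortion tradeoff.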