LP-3DGS: Learning to Prune 3D Gaussian Splatting
Authors: Zhaoliang Zhang, Tianchen Song, Yongjae Lee, Li Yang, Cheng Peng, Rama Chellappa, Deliang Fan
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments have shown that LP-3DGS consistently achieves a good balance between efficiency and high quality. We conducted comprehensive experiments on state-of-the-art (SoTA) 3D scene datasets, including Mip-NeRF360 (Barron et al. [2022]), NeRF-Synthetic (Mildenhall et al. [2021]), and Tanks & Temples (Knapitsch et al. [2017]). |
| Researcher Affiliation | Academia | Zhaoliang Zhang, Johns Hopkins University, Baltimore, MD 21218, zzhan288@jh.edu; Tianchen Song, Johns Hopkins University, Baltimore, MD 21218, tsong15@jh.edu; Yongjae Lee, Arizona State University, Tempe, AZ 85281, ylee298@asu.edu; Li Yang, University of North Carolina at Charlotte, Charlotte, NC 28223, lyang50@uncc.edu; Cheng Peng, Johns Hopkins University, Baltimore, MD 21218, cpeng26@jhu.edu; Rama Chellappa, Johns Hopkins University, Baltimore, MD 21218, rchella4@jhu.edu; Deliang Fan, Arizona State University, Tempe, AZ 85281, dfan@asu.edu |
| Pseudocode | No | The paper describes the method using mathematical formulations and descriptive text, but it does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks or figures. |
| Open Source Code | Yes | Code: https://github.com/dexgfsdfdsg/LP-3DGS.git |
| Open Datasets | Yes | We test our method on two of the most popular real-world datasets: the Mip-NeRF360 dataset (Barron et al. [2022]), which contains 9 scenes, and the Train and Truck scenes from the Tanks & Temples dataset (Knapitsch et al. [2017]). We also evaluate our method on the NeRF-Synthetic dataset (Mildenhall et al. [2021]), which includes 8 synthetic scenes. |
| Dataset Splits | No | The paper uses well-known public datasets (Mip-NeRF360, NeRF-Synthetic, Tanks & Temples) and mentions training for 30,000 iterations. However, it does not explicitly state the percentages or counts for training, validation, and test splits, referring only to 'training images' and later evaluating on 'test' sets without specifying a validation split (a hedged sketch of the common 3DGS split convention follows this table). |
| Hardware Specification | Yes | The machine running the experiments is equipped with an AMD 5955WX processor and two Nvidia A6000 GPUs. |
| Software Dependencies | No | The paper does not explicitly list specific software dependencies (e.g., library names with version numbers) in the main text. It only mentions the operating hardware and refers to instructions in the code repository. |
| Experiment Setup | Yes | We train each scene under every setting for 30,000 iterations, with mask training from iteration 19,500 to 20,000 and the importance score updated every 20 iterations. The value of τ in Equation 7 is 0.5 and the coefficient λm of the mask loss is 5e-4 (see the training-schedule sketch after this table). |
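Because the paper does not state its train/test split, a reproducer would likely fall back on the split convention of the original 3DGS codebase, which holds out every 8th image as the test set (the `llffhold=8` convention) for Mip-NeRF360 and Tanks & Temples. Whether LP-3DGS follows this convention is an assumption; the sketch below only illustrates it.

```python
# Hypothetical reconstruction of the original 3DGS evaluation split
# (every 8th image held out as test). The LP-3DGS paper does not
# confirm this split; treat it as an assumption.
def train_test_split(image_paths: list[str], llffhold: int = 8):
    train = [p for i, p in enumerate(image_paths) if i % llffhold != 0]
    test = [p for i, p in enumerate(image_paths) if i % llffhold == 0]
    return train, test
```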
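The reported hyperparameters map directly onto a training schedule. Below is a minimal sketch, assuming the straight-through sigmoid binarization commonly used in mask-based Gaussian pruning, with Equation 7's τ read as the binarization threshold; `render_loss`, `update_importance_score`, `mask_logits`, and `optimizer` are hypothetical placeholders, not the authors' API.

```python
import torch

TOTAL_ITERS = 30_000
MASK_START, MASK_END = 19_500, 20_000   # mask-training window from the paper
SCORE_EVERY = 20                        # importance-score update period
TAU = 0.5                               # tau in Equation 7 (assumed binarization threshold)
LAMBDA_M = 5e-4                         # mask-loss coefficient lambda_m

def binarize_mask(mask_logits: torch.Tensor, tau: float = TAU) -> torch.Tensor:
    """Straight-through binarization: hard threshold in the forward pass,
    sigmoid gradient in the backward pass (a common trick; assumed here)."""
    soft = torch.sigmoid(mask_logits)
    hard = (soft > tau).float()
    return hard + soft - soft.detach()

def mask_loss(mask_logits: torch.Tensor) -> torch.Tensor:
    # Sparsity regularizer on the soft mask, weighted by lambda_m
    return LAMBDA_M * torch.sigmoid(mask_logits).mean()

# Schedule skeleton (hypothetical names, shown as comments):
# for it in range(1, TOTAL_ITERS + 1):
#     loss = render_loss(...)                    # photometric loss
#     if MASK_START <= it <= MASK_END:
#         if it % SCORE_EVERY == 0:
#             update_importance_score(...)       # every 20 iterations
#         loss = loss + mask_loss(mask_logits)
#     loss.backward(); optimizer.step(); optimizer.zero_grad()
```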