Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
InfoGS: Efficient Structure-Aware 3D Gaussians via Lightweight Information Shaping
Authors: Yunchao Zhang, Guandao Yang, Leonidas Guibas, Yanchao Yang
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The proposed technique is evaluated on challenging scenes and demonstrates significant performance improvements in 3D object segmentation and promoting scene interactions, while inducing low computation and memory requirements. We evaluate the proposed MI shaping technique across various scene-editing applications, including 3D object removal, inpainting, colorization, and scene recomposition. 4 EXPERIMENTS |
| Researcher Affiliation | Academia | Yunchao Zhang¹ Guandao Yang² Leonidas Guibas² Yanchao Yang¹. ¹The University of Hong Kong, ²Stanford University |
| Pseudocode | No | The paper describes methods and processes through textual descriptions and mathematical equations but does not present any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is available at: https://github.com/StylesZhang/InfoGS. |
| Open Datasets | Yes | We test InfoGS on open-world datasets LERF-Localization (Kerr et al., 2023), the derived LERF-Mask dataset (Ye et al., 2023) with ground-truth segmentation, and Mip-NeRF 360 (Barron et al., 2022), in order to evaluate the segmentation and editing quality in complex and compositional scenarios. We also test the effectiveness of our shaping on the real dynamic scenes in the D-NeRF dataset (Pumarola et al., 2021). Additionally, we provide results on the outdoor unbounded scenes in the NERDS 360 dataset (Irshad et al., 2023) in Appendix B. |
| Dataset Splits | No | The paper mentions using several datasets for evaluation but does not provide specific details on how these datasets were split into training, validation, and test sets, such as percentages or sample counts. |
| Hardware Specification | Yes | The finetuning stage is conducted on a single RTX 3090 GPU for about 1 minute. |
| Software Dependencies | No | The paper mentions using the Adam optimizer but does not specify versions of programming languages, deep learning frameworks (e.g., PyTorch, TensorFlow), or other key software libraries with their specific version numbers. |
| Experiment Setup | Yes | During training, we set the hyperparameters λMI = 0.1, λR = 0.1 and k = 5. We adopt the Adam optimizer Kingma & Ba (2014) when shaping the attribute decoding network, with a learning rate of 0.01. ... We train the network for 1500 iterations, sampling a random view and a batch of 512 3D Gaussians at each iteration to compute the training loss. |
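The quoted experiment setup can be collected into a minimal configuration sketch. Only the values are taken from the paper; the key names below are illustrative and are not identifiers from the authors' code:

```python
# Hyperparameters quoted in the paper's experiment setup.
# Key names are illustrative, not taken from the authors' repository.
config = {
    "lambda_mi": 0.1,        # weight of the MI-shaping loss term
    "lambda_r": 0.1,         # weight of the regularization term
    "k": 5,                  # neighborhood size k
    "optimizer": "Adam",     # Kingma & Ba (2014)
    "learning_rate": 0.01,   # for shaping the attribute decoding network
    "iterations": 1500,      # training iterations
    "batch_gaussians": 512,  # 3D Gaussians sampled per iteration
    "views_per_iter": 1,     # one random view sampled per iteration
}

print(config["learning_rate"])  # -> 0.01
```

Per the paper, each iteration samples one random view together with a batch of 512 Gaussians to compute the training loss; the finetuning stage reportedly takes about one minute on a single RTX 3090.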