Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
ReferSplat: Referring Segmentation in 3D Gaussian Splatting
Authors: Shuting He, Guangquan Jie, Changshuo Wang, Yun Zhou, Shuming Hu, Guanbin Li, Henghui Ding
ICML 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate that ReferSplat achieves state-of-the-art performance on both open-vocabulary 3DGS segmentation and the newly proposed referring 3DGS segmentation tasks. |
| Researcher Affiliation | Academia | ¹MoE Key Laboratory of Interdisciplinary Research of Computation and Economics, Shanghai University of Finance and Economics, Shanghai, China; ²Institute of Big Data, College of Computer Science and Artificial Intelligence, Fudan University, Shanghai, China; ³Nanyang Technological University, Singapore; ⁴Sun Yat-sen University, Guangzhou, China. |
| Pseudocode | No | The paper describes the method using textual descriptions and mathematical equations but does not include a dedicated pseudocode or algorithm block. |
| Open Source Code | Yes | Dataset and code are available at https://github.com/heshuting555/ReferSplat. |
| Open Datasets | Yes | Dataset and code are available at https://github.com/heshuting555/ReferSplat. |
| Dataset Splits | Yes | Each scene contains approximately five expressions per object, with 236 language descriptions used for training and 59 for testing, totaling 295 descriptions for 59 objects. |
| Hardware Specification | Yes | Training is conducted on an NVIDIA RTX A6000 GPU. |
| Software Dependencies | No | The paper mentions using BERT for text embeddings and the Adam optimizer, and modifying the CUDA kernel, but does not provide specific version numbers for these or other software libraries. |
| Experiment Setup | Yes | We optimize the Gaussian referring features for 45,000 iterations, using a learning rate of 0.0025, while other parameters, such as the MLP, are trained with a learning rate of 0.0001. For hyper-parameter optimization, we set d_r, D, ϵ, and λ to 16, 128, 0.3, and 0.02, respectively. |
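For illustration, the settings quoted in the Experiment Setup row can be sketched as a training configuration with two learning-rate groups. This is a hypothetical sketch, not the authors' code: the group names (`gaussian_referring_features`, `mlp_and_other_params`) and the helper `lr_for` are assumed for clarity; only the numeric values come from the quoted text.

```python
# Hypothetical sketch of the optimization schedule quoted above.
# Group names are illustrative assumptions, not identifiers from the
# ReferSplat codebase; the numbers match the paper's quoted settings.

TRAIN_CONFIG = {
    "iterations": 45_000,  # quoted: "45,000 iterations"
    "param_groups": [
        # Gaussian referring features use the larger learning rate.
        {"name": "gaussian_referring_features", "lr": 0.0025},
        # Other parameters (e.g. the MLP) use the smaller one.
        {"name": "mlp_and_other_params", "lr": 0.0001},
    ],
    # Quoted hyper-parameters: d_r=16, D=128, epsilon=0.3, lambda=0.02.
    "hyperparameters": {"d_r": 16, "D": 128, "epsilon": 0.3, "lambda": 0.02},
}


def lr_for(param_group_name: str) -> float:
    """Look up the learning rate for a named parameter group."""
    for group in TRAIN_CONFIG["param_groups"]:
        if group["name"] == param_group_name:
            return group["lr"]
    raise KeyError(param_group_name)
```

In frameworks such as PyTorch, this two-rate setup would typically be expressed as two parameter groups passed to a single Adam optimizer, which the paper states it uses (without version details).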