Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Hybrid Mesh-Gaussian Representation for Efficient Indoor Scene Reconstruction
Authors: Binxiao Huang, Zhihao Li, Shiyong Liu, Xiao Tang, Jiajun Tang, Jiaqi Lin, Yuxin Cheng, Zhenyu Chen, Xiaofei Wu, Ngai Wong
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate that the hybrid representation maintains comparable rendering quality and achieves superior frames per second (FPS) with fewer Gaussian primitives. |
| Researcher Affiliation | Collaboration | ¹The University of Hong Kong, ²Huawei Technologies Ltd, ³Peking University, ⁴Tsinghua University |
| Pseudocode | No | The paper describes the methodology in narrative text and figures (e.g., Figure 2 for the overall pipeline) but does not include a dedicated pseudocode or algorithm block. |
| Open Source Code | No | The paper states, "We build our method upon the open-source 3DGS code." This indicates the use of third-party open-source code but does not explicitly state that the authors' own implementation for the described methodology is publicly released or provide a link to it. |
| Open Datasets | Yes | We verify the effectiveness of our approach using ten real-world indoor scenes from publicly available datasets: four scenes from Deep Blending [Hedman et al., 2018] and six scenes from ScanNet++ [Yeshwanth et al., 2023]. |
| Dataset Splits | No | The paper mentions evaluating on "test views" but does not explicitly provide details about the specific training/test/validation dataset splits used for their experiments, such as percentages, sample counts, or a detailed splitting methodology. |
| Hardware Specification | Yes | All experiments are conducted on a single V100 GPU. |
| Software Dependencies | No | The paper mentions using "open-source 3DGS code," "PGSR [Chen et al., 2024]," and "Nvdiffrast [Laine et al., 2020]" but does not provide specific version numbers for these software components. |
| Experiment Setup | Yes | Following [Kerbl et al., 2023], we train our models for 30K iterations across all scenes and use the same densification schedule and hyperparameters. We set λ to zero after the densification iteration (i.e., 15K) of 3DGS training. |
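The schedule quoted in the Experiment Setup row can be sketched as a simple iteration-dependent weight function. This is a minimal illustration, not the authors' code: the function name `lambda_weight` and the base value `0.2` are assumptions (the paper does not state λ's initial value); only the 30K total iterations and the 15K densification cutoff come from the quoted text.

```python
# Illustrative sketch of the lambda schedule described above (hedged):
# train for 30K iterations and set the weight lambda to zero once
# densification ends at iteration 15K. The base value 0.2 is a
# placeholder, not a value reported in the paper.

TOTAL_ITERS = 30_000   # total training iterations (from the paper)
DENSIFY_END = 15_000   # densification cutoff (from the paper)

def lambda_weight(iteration: int, base_lambda: float = 0.2) -> float:
    """Return the loss-term weight lambda for a given training iteration."""
    if iteration < DENSIFY_END:
        return base_lambda
    return 0.0
```

For example, `lambda_weight(10_000)` returns the base value, while any iteration at or beyond 15K returns `0.0`, matching the stated schedule.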