Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

GigaGS: 3D Gaussian Based Planar Representation for Large-Scene Surface Reconstruction

Authors: Junyi Chen, Weicai Ye, Yifan Wang, Danpeng Chen, Di Huang, Wanli Ouyang, Guofeng Zhang, Yu Qiao, Tong He

AAAI 2025

Reproducibility Variable Result LLM Response
Research Type Experimental Comprehensive experiments are conducted on various datasets. The consistent improvement demonstrates the superiority of GigaGS.
Researcher Affiliation Collaboration Junyi Chen (1,2), Weicai Ye (2,3)*, Yifan Wang (1,2), Danpeng Chen (3), Di Huang (2), Wanli Ouyang (2), Guofeng Zhang (3), Yu Qiao (2), Tong He (2)* — 1: Shanghai Jiao Tong University; 2: Shanghai Artificial Intelligence Laboratory; 3: State Key Lab of CAD&CG, Zhejiang University
Pseudocode No The paper describes the method using prose and mathematical equations but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code Yes Code: https://open3dvlab.github.io/GigaGS/
Open Datasets Yes We employ GigaGS on datasets consisting of real-life aerial large-scale scenes, which encompass the Building and Rubble scenes extracted from Mill-19 (Turki, Ramanan, and Satyanarayanan 2022), along with the Sci-Art, Campus, and Residence scenes sourced from UrbanScene3D (Liu, Xue, and Huang 2021).
Dataset Splits Yes To maintain consistency, we employ the same dataset partitioning as Mega-NeRF (Turki, Ramanan, and Satyanarayanan 2022).
Hardware Specification No The paper mentions distributing the training workload across "multiple GPUs" but does not specify any particular GPU models, CPU details, or other hardware specifications.
Software Dependencies No The paper does not provide specific version numbers for any software libraries, frameworks, or programming languages used in the experiments.
Experiment Setup Yes In our experiment, we reduced the side length of 4K aerial images to one-fourth of their original size and aligned them with a comparative method. Subsequently, we employed pixel-sfm (Lindenberger et al. 2021) to obtain an initial point cloud from the aerial images and performed Manhattan world alignment, aligning the y-axis perpendicular to the world coordinate axis of the ground plane. We divided the entire scene into 4 × 2 partitions in the case of Rubble, Building, Residence, and Sci-Art, while for the largest scene, Campus, we divided it into 4 × 4 partitions. Each partition was subjected to training for 120,000 iterations to ensure sufficient convergence.
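The partitioning scheme quoted above (a 4 × 2 grid for most scenes, 4 × 4 for Campus, each partition trained independently) can be sketched as follows. This is a minimal illustrative sketch, not code from the GigaGS release: the function name, the ground-plane bounds, and the assumption that partitions tile the x–z plane (with y vertical after Manhattan alignment) are all hypothetical.

```python
def partition_scene(x_min, x_max, z_min, z_max, nx, nz):
    """Split the scene's ground-plane bounding box into an nx-by-nz grid.

    After Manhattan world alignment the y-axis is vertical, so partitions
    are axis-aligned rectangles in the x-z (ground) plane. Each returned
    tuple is (x_lo, x_hi, z_lo, z_hi) for one partition.
    """
    dx = (x_max - x_min) / nx
    dz = (z_max - z_min) / nz
    return [
        (x_min + i * dx, x_min + (i + 1) * dx,
         z_min + j * dz, z_min + (j + 1) * dz)
        for i in range(nx)
        for j in range(nz)
    ]

# 4 x 2 grid, as used for Rubble, Building, Residence, and Sci-Art
# (bounds here are made-up example values)
parts_4x2 = partition_scene(0.0, 400.0, 0.0, 200.0, nx=4, nz=2)

# 4 x 4 grid for the largest scene, Campus
parts_4x4 = partition_scene(0.0, 400.0, 0.0, 400.0, nx=4, nz=4)
```

Each partition would then be trained for its 120,000 iterations independently, which is what makes the "multiple GPUs" distribution mentioned under Hardware Specification straightforward: partitions have no shared training state.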