Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

SpikeGS: Reconstruct 3D Scene Captured by a Fast-Moving Bio-Inspired Camera

Authors: Yijia Guo, Liwen Hu, Yuanxi Bai, Jiawei Yao, Lei Ma, Tiejun Huang

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The paper states: 'Extensive experiments on both synthetic and real-world datasets demonstrate the superiority of SpikeGS compared with existing spike-based and deblurring 3D scene reconstruction methods. Our experiments demonstrated that our method not only achieves state-of-the-art 3D reconstruction quality for high-speed scenes compared to all current methods but also exhibits exceptional robustness to speed variations. We also validated and proved the feasibility of spike camera reconstruction of open outdoor scenes using synthetic datasets for the first time.'
Researcher Affiliation | Academia | 1 National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University; 2 National Biomedical Imaging Center, Peking University; 3 University of Washington
Pseudocode | No | The paper describes the methodology using text and block diagrams (e.g., Figure 2) but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | The paper provides a code link: https://spikegs.github.io.
Open Datasets | No | The paper states: 'We contribute Spike GS dataset, the first dataset for 3D reconstruction from high-speed real-world spike scenes, the first dataset for 3D reconstruction from color spike camera, and the first dataset featuring explicit speed annotations and varying speeds within the same scene.' However, it does not provide a direct URL, DOI, or repository link for public access to the dataset itself, separate from the general project link given for the code.
Dataset Splits | No | The paper describes the synthetic and real-world datasets used, including variations in speed ('high-speed spike streams have a size of 1000 × 1000 × 1500', 'low-speed spike streams have a size of 1000 × 1000 × 2500') and different scene speeds ('high, medium, and low'). However, it does not explicitly provide details about training, validation, and test splits, or reference any predefined splits for reproducibility.
Hardware Specification | Yes | 'Our code is based on 3DGS (Kerbl et al. 2023) and we train the models for 30000 iterations on one NVIDIA A800 GPU with the same optimizer and hyper-parameters as 3DGS.'
Software Dependencies | No | The paper mentions that 'Our code is based on 3DGS (Kerbl et al. 2023)', but it does not specify any software libraries or dependencies with their version numbers.
Experiment Setup | No | The paper states: 'we train the models for 30000 iterations on one NVIDIA A800 GPU with the same optimizer and hyper-parameters as 3DGS.' While the number of iterations is given, the optimizer and remaining hyperparameter values are deferred to the original 3DGS paper rather than stated explicitly.
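As a rough point of reference for the spike-stream sizes quoted in the Dataset Splits row (1000 × 1000 × 1500 for high-speed streams, 1000 × 1000 × 2500 for low-speed streams), the sketch below estimates the storage footprint of a binary spike stream. The (height, width, timesteps) interpretation and the one-bit-per-spike packing are our assumptions, not details taken from the paper or its released code:

```python
# Hypothetical (height, width, timesteps) shapes, taken from the
# stream sizes quoted in the report text; not from the paper's code.
HIGH_SPEED_SHAPE = (1000, 1000, 1500)
LOW_SPEED_SHAPE = (1000, 1000, 2500)

def spike_stream_bytes(shape: tuple[int, int, int]) -> int:
    """Bytes needed to store a binary spike stream at one bit per spike."""
    height, width, timesteps = shape
    return height * width * timesteps // 8

print(spike_stream_bytes(HIGH_SPEED_SHAPE))  # 187500000 bytes (~188 MB)
print(spike_stream_bytes(LOW_SPEED_SHAPE))   # 312500000 bytes (~313 MB)
```

Even bit-packed, a single stream at these dimensions runs to a few hundred megabytes, which is consistent with the report's observation that no direct download link is given for the dataset.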