Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Event-3DGS: Event-based 3D Reconstruction Using 3D Gaussian Splatting
Authors: Haiqian Han, Jianing Li, Henglu Wei, Xiangyang Ji
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To evaluate the effectiveness of our Event-3DGS, we conduct experiments on the Deep Voxels synthetic dataset [38] and the real-world Event-Camera dataset [29]. For the synthetic dataset, we use seven sequences with continuous 180-degree image rotations on a gray background as the ground truth for reconstruction. |
| Researcher Affiliation | Academia | Haiqian Han, Jianing Li, Henglu Wei, Xiangyang Ji; Tsinghua University. Corresponding author: EMAIL |
| Pseudocode | No | The paper describes its method through text, mathematical equations, and a diagram (Figure 1), but does not include explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is available at https://github.com/lanpokn/Event-3DGS. |
| Open Datasets | Yes | To evaluate the effectiveness of our Event-3DGS, we conduct experiments on the Deep Voxels synthetic dataset [38] and the real-world Event-Camera dataset [29]. |
| Dataset Splits | No | For longer sequences, we typically utilize the initial 100 images for training and evaluate performance on separate data not employed during reconstruction. The paper mentions training on initial images and evaluating on 'separate data' but does not specify clear train/validation/test splits, percentages, or how the separate data is partitioned. |
| Hardware Specification | Yes | All experiments are conducted on an AMD Ryzen Threadripper 3970X 32-Core CPU and an NVIDIA GeForce RTX 3080 Ti GPU. |
| Software Dependencies | No | The paper mentions using 'E2VID [34]' but does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, CUDA versions or specific library versions). |
| Experiment Setup | Yes | We set τ to 0.05 for the high-pass filter-based photovoltage contrast estimation module. In the loss function, we set α to 0.9. For synthetic experiments with low noise, β is set to 0, while for real data with higher noise, β is set to 0.5. |
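The quoted setup gives three scalar hyperparameters: a threshold τ = 0.05 for the high-pass filter-based photovoltage contrast estimate, and loss weights α = 0.9 and β (0 for low-noise synthetic data, 0.5 for noisier real data). The paper's exact loss terms and filter are not reproduced here, so the sketch below is a hypothetical illustration of how such parameters typically enter: `high_pass_contrast` uses an assumed exponential low-pass baseline, and `composite_loss` assumes a generic weighted combination of a main term, an auxiliary term, and a noise-regularization term.

```python
import numpy as np

def high_pass_contrast(photovoltage, tau=0.05, alpha_lp=0.1):
    """Hypothetical high-pass contrast estimate: subtract an exponential
    low-pass baseline, then zero out changes with magnitude below tau.
    (The paper's actual filter is not specified in the quotes above.)"""
    low = np.empty_like(photovoltage, dtype=float)
    low[0] = photovoltage[0]
    for i in range(1, len(photovoltage)):
        low[i] = (1.0 - alpha_lp) * low[i - 1] + alpha_lp * photovoltage[i]
    contrast = photovoltage - low          # high-pass component
    contrast[np.abs(contrast) < tau] = 0.0  # suppress sub-threshold noise
    return contrast

def composite_loss(l_main, l_aux, l_noise, alpha=0.9, beta=0.0):
    """Assumed form of a weighted loss: alpha balances the two
    reconstruction terms, beta scales an optional noise term
    (beta = 0 for clean synthetic data, 0.5 for real data)."""
    return alpha * l_main + (1.0 - alpha) * l_aux + beta * l_noise

# A constant photovoltage signal yields no supra-threshold contrast.
flat = high_pass_contrast(np.ones(10))
# A step change produces a nonzero contrast response at the transition.
step = high_pass_contrast(np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0]))
```

The interesting behavioral point is the β switch: with β = 0 the noise term is inert, so the same loss code covers both the synthetic and real-data regimes reported in the paper.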