Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Optimizing 4D Gaussians for Dynamic Scene Video from Single Landscape Images
Authors: In-Hwan Jin, Haesoo Choo, Seong-Hun Jeong, Heemoon Park, Junghwan Kim, Oh-joon Kwon, Kyeongbo Kong
ICLR 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our model demonstrates the ability to provide realistic immersion in various landscape images through diverse experiments and metrics. Extensive experimental results are available at https://cvsp-lab.github.io/ICLR2025_3D-MOM/. [...] 4 EXPERIMENTS [...] 4.2 QUANTITATIVE RESULTS [...] 4.3 QUALITATIVE RESULTS [...] 4.4 ABLATION STUDY |
| Researcher Affiliation | Collaboration | 1Pusan National University 2Pukyong National University 3Busan Munhwa Broadcasting Corporation 4Korea University 5DM Studio |
| Pseudocode | No | The paper describes methods using mathematical formulations (e.g., Equation 7) and textual steps, but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code used in our research is available in full on the GitHub page at https://github.com/cvsp-lab/ICLR2025_3D-MOM, where detailed usage instructions are also provided. |
| Open Datasets | Yes | Following (Li et al., 2023), we evaluated our method and the baselines using the validation set from Holynski et al. (Holynski et al., 2021). [...] The Holynski dataset (Holynski et al., 2021), which was utilized in the research, is an existing public dataset that allows for experimentation on quantitative results. |
| Dataset Splits | No | The paper mentions evaluating on a validation set from Holynski et al. (Holynski et al., 2021) and rendering 240 ground truth frames for each sample for evaluation. It also states for training 4D Gaussians, "In step 1, we trained 3D Gaussians using all viewpoints, and in step 2, we trained 4D Gaussians using videos from sampled viewpoints." However, it does not provide explicit numerical details (percentages or counts) for standard training, validation, and test splits for their own model training setup, beyond the evaluation set. |
| Hardware Specification | Yes | We conduct all experiments on a single NVIDIA GeForce RTX 3090 GPU. |
| Software Dependencies | No | The paper mentions using specific models like ZoeDepth (Bhat et al., 2023), Holynski et al. (Holynski et al., 2021)'s method, and the pre-trained single-image animation model from SLR-SFS (Fan et al., 2023). However, it does not provide specific version numbers for these software components or any other foundational libraries (e.g., Python, PyTorch, CUDA versions). |
| Experiment Setup | Yes | Our 3D motion optimization module is trained for about 200 iterations with a batch size of 30 using the SGD Optimizer. We set the initial learning rate at 0.5 and then decayed it exponentially. |
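The quoted experiment setup (initial learning rate 0.5, decayed exponentially over roughly 200 iterations) can be sketched as a schedule. This is a minimal illustrative sketch only: the paper's quote does not specify the decay factor, so `gamma = 0.99` below is an assumed placeholder value, not the authors' setting.

```python
def exponential_lr(step, lr0=0.5, gamma=0.99):
    """Learning rate after `step` iterations of exponential decay.

    lr0   -- initial learning rate (0.5, as quoted from the paper)
    gamma -- per-iteration decay factor (ASSUMED; not given in the paper)
    """
    return lr0 * gamma ** step

# Schedule over the ~200 training iterations described in the quote.
schedule = [exponential_lr(s) for s in range(200)]
```

In frameworks such as PyTorch, the same behavior is typically obtained by pairing an SGD optimizer with an exponential learning-rate scheduler; the closed-form function above just makes the decay explicit.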