Sync-NeRF: Generalizing Dynamic NeRFs to Unsynchronized Videos
Authors: Seoha Kim, Jeongmin Bae, Youngsik Yun, Hahyun Lee, Gun Bang, Youngjung Uh
AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments are conducted on the common Plenoptic Video Dataset and a newly built Unsynchronized Dynamic Blender Dataset to verify the performance of our method. |
| Researcher Affiliation | Academia | ¹Yonsei University, ²Electronics and Telecommunications Research Institute |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Project page: https://seoha-kim.github.io/sync-nerf |
| Open Datasets | Yes | The Plenoptic Video Dataset (Li et al. 2022b) contains six challenging real-world scenes with varying degrees of dynamics. ... The dataset is publicly available. |
| Dataset Splits | No | The paper mentions "training views" and "test views" for evaluation, but it does not explicitly provide specific percentages, sample counts, or detailed methodology for how the datasets were split for training, validation, and testing. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU models, or memory specifications used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies or their version numbers (e.g., Python, PyTorch, CUDA versions) needed to replicate the experiment. |
| Experiment Setup | Yes | To capture the rapid scene motion, we set L = 10 in all experiments. |
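The "Experiment Setup" row quotes the paper's L = 10 setting, which in context refers to the number of sinusoidal frequencies used when encoding time, and the paper's core idea is a learnable per-camera time offset for unsynchronized videos. The sketch below is a minimal illustration of that combination, not the authors' implementation; the names `CameraTimeOffset` and `encode_time` are hypothetical, and the assumption that L counts temporal encoding frequencies is ours.

```python
# Minimal sketch: per-camera learnable time offsets plus a temporal positional
# encoding with L = 10 frequencies. Hypothetical names; not the released code.
import torch
import torch.nn as nn


class CameraTimeOffset(nn.Module):
    """Learnable time offset delta_k for each of K unsynchronized cameras."""

    def __init__(self, num_cameras: int):
        super().__init__()
        self.offsets = nn.Parameter(torch.zeros(num_cameras))

    def forward(self, t: torch.Tensor, cam_idx: torch.Tensor) -> torch.Tensor:
        # Shift each sample's timestamp by its camera's learned offset.
        return t + self.offsets[cam_idx]


def encode_time(t: torch.Tensor, L: int = 10) -> torch.Tensor:
    """Sinusoidal encoding of a normalized timestamp with L frequency bands."""
    freqs = 2.0 ** torch.arange(L, device=t.device, dtype=t.dtype) * torch.pi
    angles = t.unsqueeze(-1) * freqs                                  # (..., L)
    return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)  # (..., 2L)


# Usage: timestamps from camera 3 of a hypothetical 6-camera rig,
# corrected by the learned offset and then frequency-encoded.
offsets = CameraTimeOffset(num_cameras=6)
t = torch.rand(1024)                              # normalized frame times in [0, 1]
cam_idx = torch.full((1024,), 3, dtype=torch.long)
t_corrected = offsets(t, cam_idx)
t_feat = encode_time(t_corrected, L=10)           # time feature for the dynamic NeRF
```

The offsets would be optimized jointly with the rest of the model, so each camera's timestamps are implicitly re-aligned during training rather than synchronized in advance.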