Pseudo-Generalized Dynamic View Synthesis from a Video
Authors: Xiaoming Zhao, Alex Colburn, Fangchang Ma, Miguel Ángel Bautista, Joshua M. Susskind, Alexander G. Schwing
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct quantitative evaluations on the NVIDIA Dynamic Scenes data (Yoon et al., 2020) and the DyCheck iPhone data (Gao et al., 2022a). The former consists of eight dynamic scenes captured by a synchronized rig with 12 forward-facing cameras. |
| Researcher Affiliation | Collaboration | Xiaoming Zhao^{1,2}, Alex Colburn^1, Fangchang Ma^1, Miguel Angel Bautista^1, Joshua M. Susskind^1, Alexander G. Schwing^1; ^1 Apple, ^2 University of Illinois Urbana-Champaign |
| Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | For more information see project page at https://xiaoming-zhao.github.io/projects/pgdvs. |
| Open Datasets | Yes | We conduct quantitative evaluations on the NVIDIA Dynamic Scenes data (Yoon et al., 2020) and the DyCheck iPhone data (Gao et al., 2022a). |
| Dataset Splits | No | The paper describes how novel views are held out for evaluation ("remaining 11 held-out views at each time step"), which defines the test set (see the split sketch below the table). It does not explicitly specify training or validation splits for its own method, since the method adapts pre-trained models rather than training from scratch. |
| Hardware Specification | No | Tables 1 and 2 list hardware for baseline methods (e.g., V100, TPUv4, A100s, A4000s), and the paper marks appearance-fitting hardware as "Not Needed" for its own method, since no per-scene training is required. However, it does not state the hardware used for its own inference or evaluation runs. |
| Software Dependencies | No | The paper mentions several software tools, including COLMAP (Schönberger & Frahm, 2016), RAFT (Teed & Deng, 2020), OneFormer (Jain et al., 2023), Segment-Anything-Model (Kirillov et al., 2023), ZoeDepth (Bhat et al., 2023), TAPIR (Doersch et al., 2023), CoTracker (Karaev et al., 2023), MiDaS (Ranftl et al., 2022), and DPT (Ranftl et al., 2021). While the papers for these tools are cited with publication years, specific software version numbers are not provided, which would be needed for a fully reproducible description. |
| Experiment Setup | Yes | Throughout our experiments, we have N_spatial = 10... we use N_cluster = 40... we set the coefficient α for the importance metric to 100... Throughout our experiments, we use δ = 0.1. (These hyperparameters are collected in the configuration sketch below the table.) |
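
The held-out-view protocol quoted in the Dataset Splits row can be made concrete. The sketch below is a hypothetical illustration of the NVIDIA Dynamic Scenes evaluation convention (a 12-camera synchronized rig, one input view per time step, the remaining 11 views held out for evaluation); the function and variable names are ours, not taken from the paper or its released code.

```python
from typing import Dict, List, Tuple

NUM_CAMERAS = 12  # synchronized forward-facing rig in NVIDIA Dynamic Scenes


def split_views(num_time_steps: int) -> Dict[str, List[Tuple[int, int]]]:
    """Split (time_step, camera_id) pairs into input and held-out views.

    Illustrative assumption: the monocular input video is simulated by
    cycling through the rig (camera `t % NUM_CAMERAS` at time step t);
    the remaining 11 cameras at each time step become evaluation targets,
    matching the "remaining 11 held-out views at each time step" quote.
    """
    input_views, held_out_views = [], []
    for t in range(num_time_steps):
        input_cam = t % NUM_CAMERAS
        input_views.append((t, input_cam))
        held_out_views.extend(
            (t, cam) for cam in range(NUM_CAMERAS) if cam != input_cam
        )
    return {"input": input_views, "held_out": held_out_views}


if __name__ == "__main__":
    splits = split_views(num_time_steps=24)
    print(len(splits["input"]))     # 24 input frames (one per time step)
    print(len(splits["held_out"]))  # 24 * 11 = 264 evaluation views
```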
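
Similarly, the hyperparameters quoted in the Experiment Setup row can be gathered into one configuration object. Only the numeric values below come from the paper; the dataclass, its field names, and the comments are illustrative assumptions rather than the authors' code.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class PGDVSExperimentConfig:
    """Hyperparameters quoted in the Experiment Setup row.

    Only the values are from the paper; names and comments are our own
    illustrative guesses, not the authors' configuration schema.
    """
    n_spatial: int = 10   # N_spatial, as quoted
    n_cluster: int = 40   # N_cluster, as quoted
    alpha: float = 100.0  # coefficient for the importance metric
    delta: float = 0.1    # δ, used throughout the experiments


if __name__ == "__main__":
    cfg = PGDVSExperimentConfig()
    print(cfg)
```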