DeVRF: Fast Deformable Voxel Radiance Fields for Dynamic Scenes

Authors: Jia-Wei Liu, Yan-Pei Cao, Weijia Mao, Wenqiao Zhang, David Junhao Zhang, Jussi Keppo, Ying Shan, Xiaohu Qie, Mike Zheng Shou

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate DeVRF on both synthetic and real-world dynamic scenes with different types of deformation. Experiments demonstrate that DeVRF achieves two orders of magnitude speedup (100× faster) with on-par high-fidelity results compared to the previous state-of-the-art approaches.
Researcher Affiliation | Collaboration | Jia-Wei Liu1, Yan-Pei Cao2, Weijia Mao1, Wenqiao Zhang4, David Junhao Zhang1, Jussi Keppo5,6, Ying Shan2, Xiaohu Qie3, Mike Zheng Shou1. 1 Show Lab, National University of Singapore; 2 ARC Lab, Tencent PCG; 3 Tencent PCG; 4 National University of Singapore; 5 Business School, National University of Singapore; 6 Institute of Operations Research and Analytics, National University of Singapore
Pseudocode | No | The paper describes its methods through text and mathematical equations but does not include structured pseudocode or algorithm blocks.
Open Source Code | Yes | The code and dataset are released at https://github.com/showlab/DeVRF.
Open Datasets | Yes | The code and dataset are released at https://github.com/showlab/DeVRF.
Dataset Splits | No | For each scene, we use 100-view static images and 4-view dynamic sequences with 50 frames (i.e., time steps) as training data for all approaches, and randomly select another 2 views at each time step for testing.
Hardware Specification | Yes | We run all experiments on a single NVIDIA GeForce RTX 3090 GPU.
Software Dependencies | No | The paper mentions using a pre-trained RAFT model but does not specify software dependencies with version numbers (e.g., Python, PyTorch, CUDA versions).
Experiment Setup | Yes | During training, we set ω_Render = 1, ω_Cycle = 100, ω_Flow = 0.005, and ω_TV = 1 for all scenes.
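
For readers aiming to reproduce the training setup, the weights quoted in the Experiment Setup row describe a weighted sum over four loss terms. Below is a minimal PyTorch sketch of that weighting; only the weight values come from the paper, while the function name `total_loss` and the individual loss tensors are hypothetical placeholders for terms computed elsewhere (e.g., in DeVRF's released code).

```python
import torch

# Loss weights reported in the paper, used for all scenes.
W_RENDER, W_CYCLE, W_FLOW, W_TV = 1.0, 100.0, 0.005, 1.0

def total_loss(loss_render: torch.Tensor,
               loss_cycle: torch.Tensor,
               loss_flow: torch.Tensor,
               loss_tv: torch.Tensor) -> torch.Tensor:
    """Weighted sum of the four training losses.

    The individual terms (rendering loss, cycle-consistency loss,
    optical-flow loss, and total-variation regularizer) are assumed to be
    computed elsewhere; only the weighting is taken from the paper.
    """
    return (W_RENDER * loss_render
            + W_CYCLE * loss_cycle
            + W_FLOW * loss_flow
            + W_TV * loss_tv)
```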
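The Dataset Splits row can likewise be made concrete. The sketch below, in the same vein, draws 2 random held-out test views per time step; the 100-view / 4-view / 50-frame / 2-view figures come from the quoted passage, while the total camera count `N_TOTAL_VIEWS` and the sampling logic are assumptions, not details stated in the paper excerpt.

```python
import random

N_STATIC_VIEWS = 100   # static training images per scene (from the quote)
N_DYNAMIC_VIEWS = 4    # dynamic training sequences (from the quote)
N_FRAMES = 50          # time steps per sequence (from the quote)
N_TEST_VIEWS = 2       # held-out views per time step (from the quote)
N_TOTAL_VIEWS = 20     # hypothetical camera count; not stated in the quote

def sample_test_views(seed: int = 0) -> dict[int, list[int]]:
    """Randomly pick 2 test views at each of the 50 time steps,
    excluding the views assumed to be used for dynamic training."""
    rng = random.Random(seed)
    train_views = set(range(N_DYNAMIC_VIEWS))  # assume views 0..3 train
    candidates = [v for v in range(N_TOTAL_VIEWS) if v not in train_views]
    return {t: rng.sample(candidates, N_TEST_VIEWS)
            for t in range(N_FRAMES)}
```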