Compact Neural Volumetric Video Representations with Dynamic Codebooks
Authors: Haoyu Guo, Sida Peng, Yunzhi Yan, Linzhan Mou, Yujun Shen, Hujun Bao, Xiaowei Zhou
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on the NHR and DyNeRF datasets demonstrate that the proposed approach achieves state-of-the-art rendering quality while being more storage efficient. |
| Researcher Affiliation | Collaboration | ¹Zhejiang University, ²Ant Group |
| Pseudocode | No | The paper does not contain any clearly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | Yes | The source code is available at https://github.com/zju3dv/compact_vv. |
| Open Datasets | Yes | To evaluate the performance of our approach, we conduct experiments on NHR [48] and DyNeRF [17] datasets. |
| Dataset Splits | No | Following the setting of DyMap [30], we select 100 frames from each video and use 90 percent of camera views for training and the rest for testing. ... We use all 300 frames and follow the same training and testing split as in [17]. (See the split sketch below the table.) |
| Hardware Specification | Yes | We train the model on a single NVIDIA A100 GPU. |
| Software Dependencies | No | We implement our method with PyTorch [29]. However, no specific version number for PyTorch or any other software dependency is provided. |
| Experiment Setup | Yes | We set the resolution of the feature planes in the spatial dimension as 256 on NHR and 640 on DyNeRF dataset, and the one in the temporal dimension is set as 100 on NHR and 150 on DyNeRF dataset. ... We set the learning rate as 2e-3 for MLP parameters and 0.03 for feature planes, and use Adam optimizer to train the network with batches of 4096 rays. (See the training-setup sketch below the table.) |
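The "Dataset Splits" row reports the NHR protocol only in prose (100 frames per video, 90 percent of camera views for training, the rest for testing). The snippet below is a minimal sketch of such a split, not the authors' code; the function name, the random view-selection scheme, and the camera count passed in are all assumptions for illustration.

```python
# Sketch of the reported NHR-style split: 100 frames per video,
# 90% of camera views for training, the remainder for testing.
import numpy as np

def split_camera_views(num_views: int, train_ratio: float = 0.9, seed: int = 0):
    """Randomly partition camera views into train/test index lists (assumed scheme)."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(num_views)
    n_train = int(round(train_ratio * num_views))
    return order[:n_train].tolist(), order[n_train:].tolist()

frame_ids = list(range(100))                      # "select 100 frames from each video"
train_views, test_views = split_camera_views(56)  # view count is a placeholder, not from the paper
```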
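The "Experiment Setup" row lists concrete hyperparameters (plane resolutions, per-parameter-group learning rates, Adam, 4096 rays per batch). The sketch below only wires those reported numbers into a PyTorch-style optimizer setup; the plane factorization, channel count, and MLP architecture are illustrative assumptions, not the paper's model.

```python
# Minimal training-configuration sketch using the reported hyperparameters.
import torch

# Reported resolutions: spatial 256 (NHR) / 640 (DyNeRF), temporal 100 (NHR) / 150 (DyNeRF).
cfg = {
    "nhr":    {"spatial_res": 256, "temporal_res": 100},
    "dynerf": {"spatial_res": 640, "temporal_res": 150},
}
res = cfg["nhr"]
feat_dim = 16  # channel count assumed for illustration

# Stand-in feature planes and MLP (the actual model structure is not specified here).
spatial_plane = torch.nn.Parameter(torch.zeros(feat_dim, res["spatial_res"], res["spatial_res"]))
temporal_plane = torch.nn.Parameter(torch.zeros(feat_dim, res["temporal_res"], res["spatial_res"]))
mlp = torch.nn.Sequential(torch.nn.Linear(feat_dim, 64), torch.nn.ReLU(), torch.nn.Linear(64, 4))

# Adam with the reported per-group learning rates.
optimizer = torch.optim.Adam([
    {"params": [spatial_plane, temporal_plane], "lr": 0.03},  # feature planes: lr 0.03
    {"params": mlp.parameters(), "lr": 2e-3},                 # MLP parameters: lr 2e-3
])

rays_per_batch = 4096  # "batches of 4096 rays"
```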