A General Implicit Framework for Fast NeRF Composition and Rendering
Authors: Xinyu Gao, Ziyi Yang, Yunlu Zhao, Yuxiang Sun, Xiaogang Jin, Changqing Zou
AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we first demonstrate that our pipeline indeed takes effect for various NeRF works. Following that, we show how our framework can assemble a large number of neural objects to form a virtual 3D world. Furthermore, we show our approach can be combined with traditional rendering techniques to create a mixed rendering pipeline. We implement our framework in two versions: a PyTorch version for convenient reproduction and a CUDA version for testing the performance limit. We conduct experiments on our new N-object dataset, which is described in the N-object dataset testing section. |
| Researcher Affiliation | Academia | Xinyu Gao (1), Ziyi Yang (1), Yunlu Zhao (1), Yuxiang Sun (2), Xiaogang Jin (1*), Changqing Zou (1,2); (1) State Key Lab of CAD&CG, Zhejiang University; (2) Zhejiang Lab. Emails: {22121052, Ingram14, yunlu.zhao}@zju.edu.cn, sunyuxiangyx@gmail.com, jin@cad.zju.edu.cn, changqing.zou@zju.edu.cn |
| Pseudocode | Yes | The overview and data flow of our rendering framework is shown in Fig. 1, and is also described by the algorithm pseudocode in the appendix. |
| Open Source Code | No | The paper states 'We implement our framework in two versions: PyTorch version for a convenient reproduction and CUDA version for testing performance limitation.' However, it does not provide an explicit statement about open-sourcing the code or a link to a repository. |
| Open Datasets | No | The paper states 'First, we collected an N-object dataset consisting of 22 distinct NeRF objects: 7 objects were chosen from previous work (3 from the NeuS dataset of real-world objects, 4 from synthesis datasets (Mildenhall et al. 2020; Zhang et al. 2022; Verbin et al. 2022)), and the remaining 15 objects were newly created by Blender 3D.' While some components are from cited works, the full N-object dataset collected and created by the authors is not provided with public access details (link, DOI, or specific repository). |
| Dataset Splits | No | The paper mentions 'We use the pre-trained NeRF models to generate 500 random views (more random views, less artifacts) for supervision for each NeDF model of a single object, and train the intersection network over 60W iterations until convergence.' However, it does not explicitly provide standard train/validation/test split details (percentages, counts, or predefined splits) for the main experimental evaluation. |
| Hardware Specification | Yes | A single Nvidia A100 GPU is used to train NeDF for each scene from our N-object dataset. ... The CUDA version of the proposed framework is built with Vulkan, TensorRT, and a customized CUDA kernel to enable real-time manipulation on an Nvidia RTX 4090 GPU. |
| Software Dependencies | No | The paper mentions 'PyTorch version', 'CUDA version', 'Vulkan', and 'TensorRT' as software components used in the implementation but does not specify their version numbers. |
| Experiment Setup | Yes | We employ Adam as the optimizer and set the learning rate to 5e-4. We use the pre-trained NeRF models to generate 500 random views (more random views, less artifacts) for supervision for each NeDF model of a single object, and train the intersection network over 60W iterations until convergence. The batch size of training rays is set to 4,096 during each iteration. (An illustrative sketch of this training configuration follows the table.) |
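
The reported setup specifies only the optimizer, learning rate, ray batch size, and iteration budget. Below is a minimal PyTorch sketch of that configuration, not the authors' released code: `IntersectionNet`, `sample_ray_batch`, and the depth MSE loss are hypothetical placeholders, and '60W' is read as 60 × 10,000 = 600,000 iterations.

```python
# Minimal sketch of the stated training configuration (Adam, lr 5e-4,
# 4,096 rays per iteration, ~600,000 iterations). Everything except those
# hyperparameters is a hypothetical placeholder, not the paper's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class IntersectionNet(nn.Module):
    """Placeholder MLP standing in for the paper's NeDF intersection network."""
    def __init__(self, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # predicted ray-surface intersection depth
        )

    def forward(self, rays_o, rays_d):
        return self.mlp(torch.cat([rays_o, rays_d], dim=-1)).squeeze(-1)

def sample_ray_batch(batch_size):
    """Placeholder: draw a batch of rays with target depths taken from the
    500 supervision views rendered by the pre-trained NeRF (not shown here)."""
    rays_o = torch.randn(batch_size, 3)
    rays_d = F.normalize(torch.randn(batch_size, 3), dim=-1)
    target_depth = torch.rand(batch_size)
    return rays_o, rays_d, target_depth

model = IntersectionNet()
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)  # Adam, lr = 5e-4 (as stated)

NUM_ITERS = 600_000  # "60W" iterations, assuming W denotes 10,000
BATCH_RAYS = 4_096   # training rays per iteration (as stated)

for step in range(NUM_ITERS):
    rays_o, rays_d, target_depth = sample_ray_batch(BATCH_RAYS)
    pred_depth = model(rays_o, rays_d)
    loss = F.mse_loss(pred_depth, target_depth)  # assumed L2 supervision on depth
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The loop runs on random CPU tensors as written; reproducing the paper's result would additionally require the actual NeDF architecture, the ray sampling from the 500 generated views, and the loss definition, none of which are given in the excerpt.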