UE4-NeRF: Neural Radiance Field for Real-Time Rendering of Large-Scale Scene

Authors: Jiaming Gu, Minchao Jiang, Hongsheng Li, Xiaoyuan Lu, Guangming Zhu, Syed Afaq Ali Shah, Liang Zhang, Mohammed Bennamoun

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our experimental results demonstrate that our method attains rendering quality on par with state-of-the-art approaches, while additionally offering the advantage of real-time performance. Furthermore, rendering within UE4 facilitates scene editing in subsequent stages."
Researcher Affiliation | Collaboration | (1) School of Computer Science and Technology, Xidian University; (2) Algorithm R&D Center, Qing Yi (Shanghai); (3) Edith Cowan University; (4) The University of Western Australia
Pseudocode | No | The paper describes its proposed method in prose and equations (Eqs. 1-8), but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Project page: https://jamchaos.github.io/UE4-NeRF/
Open Datasets | Yes | "We also performed comparative experiments on the dataset introduced in Mega-NeRF [40]."
Dataset Splits | No | The paper mentions the training process several times but never specifies train/validation/test splits (e.g., percentages or sample counts), either for its custom datasets or for the cited comparative datasets, which hinders reproducibility.
Hardware Specification | Yes | "For each block, we train the model using one Nvidia RTX 3090 GPU for 80,000 epochs to achieve convergence, with an approximate duration of 40 minutes."
Software Dependencies | No | The paper mentions using UE4, CUDA, and HLSL fragment shaders but does not provide version numbers for any of these dependencies, which would be necessary for exact replication.
Experiment Setup | Yes | "Similar to the Instant-NGP approach, we employ multi-resolution hash encoding and construct an Encoder network comprising a 4-layer MLP (32→64, 64→64, 64→64, 64→8) to estimate the opacity and an 8-dimensional feature vector. Additionally, we develop a Decoder network with a 3-layer MLP (17→16, 16→16, 16→3) to predict the final color of the sampled points. For each block, we train the model using one Nvidia RTX 3090 GPU for 80,000 epochs to achieve convergence, with an approximate duration of 40 minutes."
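To make the quoted layer widths concrete, the following NumPy sketch wires up MLPs with the stated dimensions. Only the widths (Encoder 32→64→64→64→8, Decoder 17→16→16→3) come from the paper; the ReLU activations, the random weights, the 9-dimensional view-direction encoding used to fill the Decoder's 17-d input, and the omission of the opacity head are all assumptions for illustration.

```python
import numpy as np

def mlp(dims, rng):
    """Toy MLP with random weights and ReLU hidden layers (illustrative only)."""
    weights = [rng.standard_normal((i, o)) * 0.1 for i, o in zip(dims, dims[1:])]
    def forward(x):
        for w in weights[:-1]:
            x = np.maximum(x @ w, 0.0)  # ReLU on hidden layers (assumed)
        return x @ weights[-1]          # linear output layer
    return forward

rng = np.random.default_rng(0)
# Encoder: 4-layer MLP, hash-encoded position -> 8-d feature (paper widths).
# How opacity is read off alongside the 8-d feature is not detailed here.
encoder = mlp([32, 64, 64, 64, 8], rng)
# Decoder: 3-layer MLP, 17-d input -> RGB (paper widths).
decoder = mlp([17, 16, 16, 3], rng)

hash_features = rng.standard_normal(32)  # stand-in for multi-resolution hash encoding
feat = encoder(hash_features)            # 8-d per-point feature
view_enc = rng.standard_normal(9)        # assumed 9-d view encoding, so 8 + 9 = 17
rgb = decoder(np.concatenate([feat, view_enc]))
```

With such small per-point networks, per-sample evaluation is cheap enough to port to an HLSL fragment shader, which is consistent with the paper's claim of real-time rendering inside UE4.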