Volume Feature Rendering for Fast Neural Radiance Field Reconstruction

Authors: Kang Han, Wei Xiang, Lu Yu

NeurIPS 2023

Reproducibility Variable Result LLM Response
Research Type Experimental The paper presents experimental results.
Researcher Affiliation Academia Kang Han, Wei Xiang, Lu Yu; School of Computing, Engineering and Mathematical Sciences, La Trobe University; {k.han, w.xiang, l.yu}@latrobe.edu.au
Pseudocode No The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code No The paper does not explicitly state that source code for the described methodology is provided, nor does it include a link to a code repository.
Open Datasets Yes We implement the proposed VFR using the NerfAcc library [16], which is based on the deep learning framework PyTorch [22]. The learnable features are organized by a multiresolution hash grid (MHG) [19]. This feature representation models the feature function F in (3), which accepts a position as input and outputs a feature vector. The color NN contains a spatial MLP and a directional MLP. The spatial MLP has two layers and the directional MLP has four layers, all with 256 hidden neurons. The size of the bottleneck feature is set to 256. We use the GELU [14] instead of the commonly used ReLU [1] activation function, as we found the GELU results in slightly better quality. The density of a sample xi is derived from the queried feature vector F(xi) by a tiny density mapping layer. On the real-world 360 dataset [3], the mapping layer is a linear transform from the feature vector to the density value. On the NeRF synthetic [18] dataset, we found a tiny network with one hidden layer and 64 neurons yields slightly better rendering quality.
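The quote above fully specifies the color NN's shape (two-layer spatial MLP, four-layer directional MLP, 256 hidden neurons, 256-d bottleneck, GELU, plus a linear density mapping). A minimal NumPy sketch of that shape, not the authors' implementation: weights are untrained random values, and the input feature dimension of 32 is a hypothetical placeholder for the MHG feature vector.

```python
import numpy as np

rng = np.random.default_rng(0)

def gelu(x):
    # GELU activation (tanh approximation), used instead of ReLU per the paper.
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def linear(d_in, d_out):
    # Randomly initialised weight/bias pair; illustration only, untrained.
    return rng.standard_normal((d_in, d_out)) * 0.01, np.zeros(d_out)

F_DIM, HIDDEN, BOTTLENECK = 32, 256, 256  # F_DIM is a hypothetical feature size

# Density mapping: a single linear transform of the queried feature vector
# (as on the 360 dataset; the synthetic dataset uses a tiny 1-hidden-layer net).
Wsig, bsig = linear(F_DIM, 1)

# Spatial MLP: two layers, 256 hidden neurons, ending in the 256-d bottleneck.
Ws1, bs1 = linear(F_DIM, HIDDEN)
Ws2, bs2 = linear(HIDDEN, BOTTLENECK)

# Directional MLP: four layers, 256 hidden neurons, outputting RGB.
Wd1, bd1 = linear(BOTTLENECK + 3, HIDDEN)  # bottleneck + 3-d view direction
Wd2, bd2 = linear(HIDDEN, HIDDEN)
Wd3, bd3 = linear(HIDDEN, HIDDEN)
Wd4, bd4 = linear(HIDDEN, 3)

def color_nn(feature, view_dir):
    # Map a feature vector and view direction to an RGB value.
    h = gelu(feature @ Ws1 + bs1)
    bottleneck = h @ Ws2 + bs2
    h = np.concatenate([bottleneck, view_dir], axis=-1)
    h = gelu(h @ Wd1 + bd1)
    h = gelu(h @ Wd2 + bd2)
    h = gelu(h @ Wd3 + bd3)
    return h @ Wd4 + bd4

def density(feature):
    # Density from the feature vector via the linear mapping layer.
    return feature @ Wsig + bsig

f = rng.standard_normal(F_DIM)
d = np.array([0.0, 0.0, 1.0])
rgb = color_nn(f, d)
sigma = density(f)
```

The actual implementation organizes features with a multiresolution hash grid and runs in PyTorch via NerfAcc; this sketch only mirrors the layer counts and widths.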
Dataset Splits No The paper does not provide specific dataset split information for training, validation, and testing. It mentions a 'hierarchical importance sampling strategy' and that the 'final number of samplings on the radiance field is 32', but these concern per-ray sampling within the model, not a formal dataset split for reproduction.
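The 32 samplings mentioned above are per-ray samples composited by standard volume rendering. A generic NumPy sketch of that compositing step, under the assumption of evenly spaced samples and an illustrative 8-d per-sample feature (not the paper's exact VFR pipeline):

```python
import numpy as np

def composite(sigmas, values, deltas):
    # Standard volume-rendering weights: alpha_i = 1 - exp(-sigma_i * delta_i),
    # w_i = T_i * alpha_i with transmittance T_i = prod_{j<i}(1 - alpha_j).
    alphas = 1.0 - np.exp(-sigmas * deltas)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas
    return weights @ values, weights

N = 32  # final number of samples per ray reported in the paper
rng = np.random.default_rng(1)
sigmas = rng.uniform(0.0, 5.0, N)          # per-sample densities (random stand-ins)
feats = rng.standard_normal((N, 8))        # per-sample features; dim 8 is illustrative
deltas = np.full(N, 1.0 / N)               # even spacing along the ray
rendered_feature, w = composite(sigmas, feats, deltas)
```

In VFR this accumulation is applied to feature vectors before a single color-NN evaluation per ray, rather than to per-sample colors.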
Hardware Specification Yes The running times are measured on one RTX 3090 GPU.
Software Dependencies No The paper mentions using the 'NerfAcc library [16]' and 'PyTorch [22]' but does not specify their version numbers.
Experiment Setup Yes The color NN contains a spatial MLP and a directional MLP. The spatial MLP has two layers and the directional MLP has four layers, all with 256 hidden neurons. The size of the bottleneck feature is set to 256. We use the GELU [14] instead of the commonly used ReLU [1] activation function, as we found the GELU results in slightly better quality. The models are trained using the Adam optimizer with a learning rate of 0.01 and the default learning rate scheduler in NerfAcc [16].
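The only optimizer detail the quote pins down is Adam with a learning rate of 0.01 (the scheduler is deferred to NerfAcc's default). A minimal NumPy sketch of one Adam update at that learning rate, with the standard default betas and epsilon assumed:

```python
import numpy as np

def adam_step(param, grad, m, v, t, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    # One Adam update with the paper's learning rate of 0.01;
    # beta1/beta2/eps are the common defaults, assumed rather than stated.
    m = beta1 * m + (1 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad**2       # second-moment estimate
    m_hat = m / (1 - beta1**t)                  # bias correction
    v_hat = v / (1 - beta2**t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v

p = np.array([1.0, -2.0])
g = np.array([0.5, -0.5])
m, v = np.zeros_like(p), np.zeros_like(p)
p, m, v = adam_step(p, g, m, v, t=1)
# First step moves each parameter by ~lr in the direction opposing its gradient.
```

In practice the authors use PyTorch's built-in `torch.optim.Adam` via NerfAcc rather than a hand-rolled update like this.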