Neural Transmitted Radiance Fields

Authors: Chengxuan Zhu, Renjie Wan, Boxin Shi

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The proposed method achieves superior performance from the experiments on a newly collected dataset compared with state-of-the-art methods. We report quantitative performance using PSNR, SSIM and LPIPS [36]. We conduct several experiments to evaluate the benefits of these two parts.
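The PSNR metric quoted above is the standard peak signal-to-noise ratio, 10·log10(MAX² / MSE). A minimal sketch in numpy (the function name and the [0, 1] image range are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def psnr(reference, estimate, max_val=1.0):
    """Peak signal-to-noise ratio in dB between two images valued in [0, max_val]."""
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: no noise
    return 10.0 * np.log10(max_val ** 2 / mse)

# Example: a uniform offset of 0.1 gives MSE = 0.01, hence PSNR = 20 dB.
ref = np.zeros((4, 4, 3))
est = ref + 0.1
print(round(psnr(ref, est), 2))  # 20.0
```

SSIM and LPIPS are structural and learned perceptual metrics, respectively, and are usually computed with library implementations rather than a few lines of code.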
Researcher Affiliation | Academia | Chengxuan Zhu, Nat'l Eng. Research Center of Visual Technology, School of Computer Science, Peking University (peterzhu@pku.edu.cn); Renjie Wan, Department of Computer Science, Hong Kong Baptist University (renjiewan@comp.hkbu.edu.hk); Boxin Shi, Nat'l Eng. Research Center of Visual Technology, School of Computer Science, Peking University (shiboxin@pku.edu.cn)
Pseudocode | No | The paper describes its method through text and mathematical equations but does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Our code and data is available at https://github.com/FreeButUselessSoul/TNeRF.
Open Datasets | Yes | Our experiments are based on a real-world dataset we collect. This dataset contains 8 different real-world scenes, each consisting of 20 to 30 mixture images with different poses. Specifically, 4 scenes are with the ground truth for quantitative evaluations in the experiments. [...] Our code and data is available at https://github.com/FreeButUselessSoul/TNeRF. We also test our network on the LLFF dataset [35] and the RFFR dataset [32].
Dataset Splits | No | The paper states 'All the results in Section 5 are obtained using six views for training', which indicates the training data, but it does not specify explicit train/validation/test splits, percentages, or sample counts.
Hardware Specification | Yes | We optimize a single model for about 100K iterations on two NVIDIA V100 GPUs.
Software Dependencies | No | The paper states 'We implement our framework using PyTorch', but it does not provide specific version numbers for PyTorch or any other software dependencies.
Experiment Setup | Yes | In the training and testing phase, two eight-layer MLPs with 256 channels are used to predict colors c and densities σ corresponding to the transmitted and reflection scenes. We train a coarse network along with a fine network for importance sampling. We sample 64 points along each ray in the coarse model and 64 points in the fine model. A batch contains an image patch of 32×32 pixels, equivalent to 1024 rays. Similar to the settings in NeRF [1], positional encoding is applied to input locations before they are passed into the MLPs. We use the Adam optimizer with default values β1 = 0.999, β2 = 0.9, ε = 10^-8, and a learning rate 10^-4 that decays following the cosine scheduler during the optimization. We optimize a single model for about 100K iterations on two NVIDIA V100 GPUs.
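The positional encoding mentioned in the setup maps each input coordinate p to a stack of sinusoids (sin(2^k·π·p), cos(2^k·π·p)) before it enters the MLPs. A minimal numpy sketch under the assumption of NeRF's usual 10 frequency bands for positions (the function name, frequency count, and batch shape are illustrative, not taken from the paper):

```python
import numpy as np

def positional_encoding(x, num_freqs=10):
    """NeRF-style encoding: each coordinate p becomes sin(2^k * pi * p) and
    cos(2^k * pi * p) for k = 0..num_freqs-1, i.e. 2*num_freqs values per coordinate."""
    x = np.asarray(x, dtype=np.float64)
    freqs = (2.0 ** np.arange(num_freqs)) * np.pi     # shape (num_freqs,)
    angles = x[..., None] * freqs                     # shape (..., D, num_freqs)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*x.shape[:-1], -1)             # shape (..., D * 2 * num_freqs)

# A batch of 1024 rays with 64 sampled 3D points each, matching the setup above:
pts = np.random.rand(1024, 64, 3)
print(positional_encoding(pts).shape)  # (1024, 64, 60)
```

The 60-dimensional output per point (3 coordinates × 2 sinusoids × 10 frequencies) would then feed the eight-layer, 256-channel MLPs the row describes.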