Binary Radiance Fields

Authors: Seungjoo Shin, Jaesik Park

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In our experiments, binary radiance field representation successfully outperforms the reconstruction performance of state-of-the-art (SOTA) storage-efficient radiance field models with lower storage allocation. (A hedged sketch of feature binarization follows this table.)
Researcher Affiliation | Academia | Seungjoo Shin (GSAI, POSTECH, seungjoo.shin@postech.ac.kr); Jaesik Park (CSE & IPAI, Seoul National University, jaesik.park@snu.ac.kr)
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper provides a project homepage URL (https://seungjooshin.github.io/BiRF) but does not include an explicit statement about releasing source code or a direct link to a code repository.
Open Datasets | Yes | We use two synthetic datasets: the Synthetic-NeRF dataset [1] and the Synthetic-NSVF dataset [13]... We also employ the Tanks and Temples dataset [41]...
Dataset Splits | No | The paper mentions '100 training views and 200 test views' for the Synthetic-NeRF and Synthetic-NSVF datasets, and 'training and test views' for Tanks and Temples, but does not explicitly specify a validation split or its size.
Hardware Specification | Yes | We optimize all our models for 20K iterations on a single GPU (NVIDIA RTX A6000).
Software Dependencies | No | The paper mentions using 'Instant-NGP [20]', 'NerfAcc [42]', and the 'Adam [43] optimizer' but does not provide specific version numbers for any software dependencies.
Experiment Setup | Yes | We use 16 levels of multi-resolution 3D feature grids with resolutions from 16 to 1024, while each grid includes up to T_3D feature vectors. We also utilize four levels of multi-resolution 2D feature grids with resolutions from 64 to 512, while each grid includes up to T_2D feature vectors... We use the Adam [43] optimizer with an initial learning rate of 0.01, which we decay at 15K and 18K iterations by a factor of 0.33. Furthermore, we adopt a warm-up stage during the first 1K iterations to ensure stable optimization. We set λ_sparsity = 2.0 × 10⁻⁵ in this work. (A hedged sketch of this optimization schedule follows this table.)
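To make the Research Type row concrete: BiRF's storage savings come from keeping each grid feature as a single bit. The sketch below is an illustration, not the authors' code; it assumes PyTorch and the standard sign-binarization-with-straight-through-estimator trick commonly used to train binary parameters, in the spirit of the paper's binarization-aware training.

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Sign binarization with a straight-through estimator (STE):
    the forward pass snaps real-valued features to {-1, +1}, while the
    backward pass copies gradients through unchanged so the underlying
    real-valued parameters remain trainable."""

    @staticmethod
    def forward(ctx, features):
        # torch.sign maps negatives to -1 and positives to +1
        # (exact zeros map to 0; a real implementation would break ties).
        return torch.sign(features)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output  # identity gradient: the "straight-through" part

# Toy usage with a placeholder feature grid (shapes are illustrative).
grid = torch.randn(1024, 2, requires_grad=True)  # real-valued latent grid
binary_grid = BinarizeSTE.apply(grid)            # binarized values
binary_grid.sum().backward()                     # gradients reach `grid`
```

At inference time only the sign bits need to be stored, which is where the reported storage reduction would come from; the real-valued grid is needed only during optimization.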
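The Experiment Setup row likewise maps onto a short training-loop skeleton. In the minimal sketch below, only the iteration count, initial learning rate, decay points and factor, warm-up length, and λ_sparsity come from the quoted setup; the linear warm-up shape, the placeholder parameters, and the dummy loss are assumptions.

```python
import torch

# Placeholder parameter standing in for the multi-resolution feature grids.
params = [torch.nn.Parameter(torch.randn(1024, 2))]
optimizer = torch.optim.Adam(params, lr=0.01)   # initial learning rate 0.01

WARMUP_ITERS = 1_000            # warm-up stage during the first 1K iterations
DECAY_AT = {15_000, 18_000}     # LR decay at 15K and 18K iterations
DECAY_FACTOR = 0.33
TOTAL_ITERS = 20_000            # models are optimized for 20K iterations
LAMBDA_SPARSITY = 2.0e-5        # sparsity regularizer weight from the paper

for it in range(1, TOTAL_ITERS + 1):
    # Linear warm-up is an assumption; the paper only states that a
    # warm-up stage is used, not its exact shape.
    if it <= WARMUP_ITERS:
        for group in optimizer.param_groups:
            group["lr"] = 0.01 * it / WARMUP_ITERS
    elif it in DECAY_AT:
        for group in optimizer.param_groups:
            group["lr"] *= DECAY_FACTOR

    optimizer.zero_grad()
    # Dummy objective standing in for the real one (photometric loss
    # plus a sparsity term weighted by lambda_sparsity).
    loss = params[0].square().mean() + LAMBDA_SPARSITY * params[0].abs().mean()
    loss.backward()
    optimizer.step()
```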