HyP-NeRF: Learning Improved NeRF Priors using a HyperNetwork

Authors: Bipasha Sen, Gaurav Singh, Aditya Agarwal, Rohith Agaram, Madhava Krishna, Srinath Sridhar

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We provide qualitative comparisons and evaluate HyP-NeRF on three tasks: generalization, compression, and retrieval, demonstrating our state-of-the-art results.
Researcher Affiliation | Academia | Bipasha Sen (MIT CSAIL, bise@mit.edu); Gaurav Singh (IIIT, Hyderabad, gaurav.si); Aditya Agarwal (MIT CSAIL, adityaag@mit.edu); Rohith Agaram (IIIT, Hyderabad, rohith.agaram); K Madhava Krishna (IIIT, Hyderabad, mkrishna@iiit.ac.in); Srinath Sridhar (Brown University, srinath@brown.edu)
Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. It provides mathematical equations and architectural diagrams.
Open Source Code | No | The paper mentions a project page, 'hyp-nerf.github.io', in a footnote, which typically showcases results but does not guarantee the release of source code. It also cites '[1] GitHub ashawkey/torch-ngp', a third-party implementation of Instant NGP used as a component, not the authors' full HyP-NeRF source code. There is no explicit statement that their own source code is released.
Open Datasets | Yes | We primarily compare against two baselines, PixelNeRF [75] and Instant NGP [35], on the Amazon-Berkeley Objects (ABO) [11] dataset. [...] Additionally, we compare with the other baselines on SRN at 128×128 resolution qualitatively in the main paper (Figure 5) and quantitatively in the supplementary.
Dataset Splits | No | The paper mentions a 'training dataset' and uses 'novel NeRF instances' for generalization testing, but it does not specify a distinct validation set or its size/percentage for hyperparameter tuning or model selection during training.
Hardware Specification | Yes | We perform all of our experiments on NVIDIA RTX 2080Tis.
Software Dependencies | No | The paper mentions using 'Instant NGP' and 'VQVAE2 [42] as the backbone' but does not specify version numbers for these software dependencies or any other libraries.
Experiment Setup | Yes | We use Instant NGP as f(·), with 16 levels, a hashtable size of 2^11, a feature dimension of 2, and linear interpolation for computing the MRHE; the MLP has a total of 5 layers, each 64-dimensional. Our hypernetwork, M, consists of 6 MLPs: 1 predicts the MRHE, and the rest predict the parameters ϕ for each of the MLP layers of f. Each of these MLPs has 3 layers, each 512-dimensional.
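
For reference, below is a minimal PyTorch sketch of the architecture sizes quoted in the experiment-setup row. The 128-dimensional instance code, the module names, and the omitted density/color output head are illustrative assumptions for this sketch, not the authors' released code.

import torch
import torch.nn as nn

# Sizes quoted from the paper's experiment setup.
NUM_LEVELS = 16        # MRHE levels
TABLE_SIZE = 2 ** 11   # hashtable entries per level
FEATURE_DIM = 2        # features stored per hashtable entry
NERF_HIDDEN = 64       # width of each NeRF MLP layer
NERF_LAYERS = 5        # number of NeRF MLP layers

def make_head(in_dim, out_dim):
    # One of the hypernetwork's 6 MLPs: 3 layers, 512-dimensional.
    return nn.Sequential(
        nn.Linear(in_dim, 512), nn.ReLU(),
        nn.Linear(512, 512), nn.ReLU(),
        nn.Linear(512, out_dim),
    )

class HyperNetworkSketch(nn.Module):
    # Maps a per-object instance code to Instant NGP parameters: one head
    # predicts the MRHE tables, the remaining five predict the NeRF MLP layers.
    def __init__(self, code_dim=128):  # code_dim is an assumed value
        super().__init__()
        self.mrhe_head = make_head(code_dim, NUM_LEVELS * TABLE_SIZE * FEATURE_DIM)
        # Layer widths of the predicted NeRF MLP; its input is the
        # concatenated MRHE feature vector (NUM_LEVELS * FEATURE_DIM).
        dims = [NUM_LEVELS * FEATURE_DIM] + [NERF_HIDDEN] * NERF_LAYERS
        self.mlp_heads = nn.ModuleList(
            make_head(code_dim, dims[i] * dims[i + 1] + dims[i + 1])  # W and b, flattened
            for i in range(NERF_LAYERS)
        )

    def forward(self, code):
        # code: a single instance code of shape (code_dim,).
        tables = self.mrhe_head(code).view(NUM_LEVELS, TABLE_SIZE, FEATURE_DIM)
        layers = [head(code) for head in self.mlp_heads]  # flat (W, b) per layer
        return tables, layers

Reshaping each flat vector into its weight matrix and bias, and wiring the density/color outputs of the final layer, are omitted here for brevity.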