NeRS: Neural Reflectance Surfaces for Sparse-view 3D Reconstruction in the Wild

Authors: Jason Zhang, Gengshan Yang, Shubham Tulsiani, Deva Ramanan

NeurIPS 2021

Reproducibility assessment — each variable is listed with its result and the supporting LLM response:
Research Type: Experimental
    "Finally, rather than illustrating our results on synthetic scenes or controlled in-the-lab capture, we assemble a novel dataset of multi-view images from online marketplaces for selling goods. Such in-the-wild multi-view image sets pose a number of challenges, including a small number of views with unknown/rough camera estimates. We demonstrate that surface-based neural reconstructions enable learning from such data, outperforming volumetric neural rendering-based reconstructions."
Researcher Affiliation: Academia
    "Jason Y. Zhang, Gengshan Yang, Shubham Tulsiani, Deva Ramanan — Robotics Institute, Carnegie Mellon University"
Pseudocode: No
    The paper describes its algorithms and models using mathematical equations and textual explanations, but it does not include any pseudocode or clearly labeled algorithm blocks.
Open Source Code: Yes
    "The project page with code and video visualizations can be found at jasonyzhang.com/ners."
Open Datasets: Yes
    "To address the shortage of in-the-wild multi-view datasets, we introduce a new dataset, Multi-view Marketplace Cars (MVMC), collected from an online marketplace with thousands of car listings. [...] The filtered dataset with anonymized personally identifiable information (e.g. license plates and phone numbers), masks, initial camera poses, and optimized NeRS cameras will be made available on the project page."
Dataset Splits: No
    The paper mentions training data and an evaluation set, but does not explicitly describe a separate validation set for hyperparameter tuning or model selection. For evaluation, it holds out one image-camera pair as the target and uses the remaining views of that instance for training.
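The evaluation protocol quoted above is a leave-one-out split within each evaluation instance: every view serves as the held-out target once, with the remaining views used for fitting. A minimal sketch of that split (function and variable names here are illustrative, not from the released code):

```python
# Hedged sketch of the leave-one-out evaluation split described above:
# within each evaluation instance, one image-camera pair is the target
# and the remaining views are used for training/fitting.

def leave_one_out_splits(views):
    """Yield (target_view, training_views) pairs, one per held-out view."""
    for i, target in enumerate(views):
        train = views[:i] + views[i + 1:]  # all views except the target
        yield target, train

# Toy instance with four views of one object.
views = ["img_0", "img_1", "img_2", "img_3"]
splits = list(leave_one_out_splits(views))
# Each view is the target exactly once; the other three form the fit set.
```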
Hardware Specification: Yes
    "With 4 Nvidia 1080TI GPUs, training NeRS requires approximately 30 minutes."
Software Dependencies: No
    The paper mentions "PyTorch3D's differentiable renderer" but does not provide specific version numbers for any software dependencies.
Experiment Setup: Yes
    "We optimize (7) in a coarse-to-fine fashion, starting with a few parameters and slowly increasing the number of free parameters. We initially optimize (7) w.r.t. only the camera parameters Πi. After convergence, we sequentially optimize fshape, ftex, and fenv/α/ks. [...] See Sec. ?? for hyperparameters and additional details."
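The staged coarse-to-fine schedule quoted above — cameras first, then shape, texture, and finally the illumination/specular terms — can be sketched as gradient descent with a growing set of unfrozen parameter groups. The objective and parameter names below are toy stand-ins, not the paper's actual loss or implementation:

```python
# Hedged sketch of NeRS-style coarse-to-fine optimization: each stage
# unfreezes one more parameter group and continues gradient descent.

def optimize_stagewise(params, grad_fn, stages, lr=0.1, steps=100):
    """Run gradient descent, enabling one extra parameter group per stage."""
    free = []
    for stage in stages:               # e.g. ["camera"], then ["shape"], ...
        free += stage                  # coarse-to-fine: grow the free set
        for _ in range(steps):
            grads = grad_fn(params)
            for name in free:          # only update unfrozen parameters
                params[name] -= lr * grads[name]
    return params

# Toy quadratic objective: each parameter should reach its target value.
targets = {"camera": 1.0, "shape": -2.0, "texture": 0.5, "env": 3.0}

def grad_fn(p):
    # Gradient of sum_k (p[k] - targets[k])^2 w.r.t. each parameter.
    return {k: 2.0 * (p[k] - targets[k]) for k in p}

params = {k: 0.0 for k in targets}
stages = [["camera"], ["shape"], ["texture"], ["env"]]
params = optimize_stagewise(params, grad_fn, stages)
```

The design point this illustrates is that early stages only adjust the coarse degrees of freedom (cameras), so later, higher-frequency terms are fit against an already-aligned reconstruction.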