Scene Representation Networks: Continuous 3D-Structure-Aware Neural Scene Representations

Authors: Vincent Sitzmann, Michael Zollhoefer, Gordon Wetzstein

NeurIPS 2019

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | We train SRNs on several object classes and evaluate them for novel view synthesis and few-shot reconstruction. We further demonstrate the discovery of a non-rigid face model. |
| Researcher Affiliation | Academia | Vincent Sitzmann, Michael Zollhöfer, Gordon Wetzstein ({sitzmann, zollhoefer}@cs.stanford.edu, gordon.wetzstein@stanford.edu), Stanford University |
| Pseudocode | Yes | Algorithm 1: Differentiable Ray-Marching |
| Open Source Code | Yes | Code and datasets are available. |
| Open Datasets | Yes | ShapeNet v2: the chair and car classes, with 4.5k and 2.5k model instances respectively [39]. |
| Dataset Splits | No | The paper defines training and held-out test sets (e.g., '50 images of each object' for training, '100 objects from a held-out test set'), but it does not describe a validation set or give exact split percentages or counts. |
| Hardware Specification | No | A single forward pass takes around 120 ms and 3 GB of GPU memory per batch item, but no specific GPU model or other hardware is named. |
| Software Dependencies | No | The paper states that 'Hyperparameters, computational complexity, and full network architectures for SRNs and all baselines are in the supplement,' but it does not list software dependencies with version numbers (e.g., 'PyTorch 1.9', 'Python 3.8'). |
| Experiment Setup | Yes | Hyperparameters, computational complexity, and full network architectures for SRNs and all baselines are in the supplement. |
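The "Algorithm 1: Differentiable Ray-Marching" flagged above advances each camera ray by a step length predicted at every 3D point; in the paper, that step comes from a learned LSTM reading the scene network's feature vector at the current point. As a rough, hedged illustration only, the sketch below substitutes a hand-written sphere signed-distance function for the learned step predictor — the function names and the SDF scene are our assumptions, not the paper's:

```python
# Hedged sketch of a ray-marching loop in the spirit of Algorithm 1.
# SRNs learn the per-step length with an LSTM conditioned on the scene
# network's features; here a closed-form sphere SDF plays that role
# purely to make the marching structure concrete and runnable.

def march_ray(origin, direction, step_predictor, n_steps=20):
    """March a point along `direction` from `origin`, advancing by the
    predicted step length at each iteration (SRNs: step = LSTM(phi(x)))."""
    point = list(origin)
    for _ in range(n_steps):
        step = step_predictor(point)
        point = [p + step * d for p, d in zip(point, direction)]
    return point

def sphere_sdf(p, radius=1.0):
    """Signed distance to a sphere at the origin (stand-in 'scene')."""
    return (p[0] ** 2 + p[1] ** 2 + p[2] ** 2) ** 0.5 - radius

# Camera at z = -3 looking down +z: the ray converges onto the sphere
# surface at (0, 0, -1), where the predicted step length shrinks to 0.
surface = march_ray([0.0, 0.0, -3.0], [0.0, 0.0, 1.0], sphere_sdf)
```

In the actual SRN pipeline every operation in this loop is differentiable, so gradients from a 2D image loss flow through the final intersection point back into both the scene representation and the learned step predictor.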