MetaSDF: Meta-Learning Signed Distance Functions

Authors: Vincent Sitzmann, Eric Chan, Richard Tucker, Noah Snavely, Gordon Wetzstein

NeurIPS 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We study properties of different generalization methods on 2D signed distance functions (SDFs) extracted from the MNIST dataset. From every MNIST digit, we extract a 2D SDF via a distance transform, such that the contour of the digit is the zero-level set of the corresponding SDF, see Fig. 2. Following [1], we directly fit the SDF of the MNIST digit via a fully connected neural network. We benchmark three alternative generalization approaches.
Researcher Affiliation | Collaboration | Vincent Sitzmann (Stanford University, sitzmann@cs.stanford.edu); Eric R. Chan (Stanford University, erchan@cs.stanford.edu); Richard Tucker (Google Research, richardt@google.com); Noah Snavely (Google Research, snavely@google.com); Gordon Wetzstein (Stanford University, gordon.wetzstein@stanford.edu)
Pseudocode | Yes | Algorithm 1 MetaSDF: Gradient-based meta-learning of shape spaces
Open Source Code | Yes | All code and datasets will be made publicly available.
Open Datasets | Yes | We study properties of different generalization methods on 2D signed distance functions (SDFs) extracted from the MNIST dataset. From every MNIST digit, we extract a 2D SDF via a distance transform, such that the contour of the digit is the zero-level set of the corresponding SDF, see Fig. 2. Following [1], we directly fit the SDF of the MNIST digit via a fully connected neural network. We benchmark three alternative generalization approaches.
Dataset Splits | Yes | We train all models on SDFs of the full MNIST training set, providing supervision via a regular grid of 64 × 64 ground-truth SDF samples. For CNPs and the proposed approach, we train two models each, conditioned on either (1) the same 64 × 64 ground-truth SDF samples or (2) a set of 512 points sampled from the zero-level set. We then test all models to reconstruct SDFs from the unseen MNIST test set from these two different kinds of context.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running experiments.
Software Dependencies | No | The paper mentions 'ADAM optimizer [40]' but does not provide specific version numbers for software dependencies like Python, PyTorch/TensorFlow, or CUDA.
Experiment Setup | Yes | All models are implemented as fully connected ReLU-MLPs with 256 hidden units and no normalization layers. Φ is implemented with four layers. The set encoder of CNPs similarly uses four layers. Hypernetworks are implemented with three layers as in [8]. The proposed approach performs 5 inner-loop update steps, where we initialize α as 1 × 10⁻¹. All models are optimized using the ADAM optimizer [40] with a learning rate of 1 × 10⁻⁴.
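
The 'Research Type' and 'Open Datasets' rows quote the paper's 2D SDF construction: each MNIST digit is converted to an SDF via a distance transform so that the digit contour is the zero-level set. A minimal sketch of one way to do this, assuming NumPy/SciPy, a 0.5 binarization threshold, and a positive-outside sign convention (none of which the paper specifies):

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def mnist_digit_to_sdf(img, threshold=0.5):
        # Approximate signed distance field for one MNIST digit: positive outside
        # the digit, negative inside, zero on the contour (the sign convention is
        # an assumption; the paper only states the contour is the zero-level set).
        inside = np.asarray(img) >= threshold
        dist_to_digit = distance_transform_edt(~inside)       # distances for exterior pixels
        dist_to_background = distance_transform_edt(inside)   # distances for interior pixels
        return dist_to_digit - dist_to_background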
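
The 'Dataset Splits' row describes two kinds of context per shape: (1) the full regular grid of 64 × 64 ground-truth SDF samples and (2) 512 points sampled from the zero-level set. A sketch of how such context sets could be assembled; the [-1, 1] coordinate normalization and the pick-smallest-|SDF| contour sampling are assumptions, not the paper's stated procedure:

    import numpy as np

    def grid_context(sdf_grid):
        # Context type (1): every sample of the 64 x 64 ground-truth SDF grid.
        h, w = sdf_grid.shape
        ys, xs = np.meshgrid(np.linspace(-1, 1, h), np.linspace(-1, 1, w), indexing="ij")
        coords = np.stack([xs.ravel(), ys.ravel()], axis=-1)   # (h * w, 2) coordinates
        return coords, sdf_grid.reshape(-1, 1)

    def zero_level_set_context(sdf_grid, num_points=512):
        # Context type (2): points on (or nearest to) the digit contour, where the
        # target SDF value is zero. Taking the grid cells with smallest |SDF| is a
        # stand-in for however the paper actually samples the level set.
        h, w = sdf_grid.shape
        flat = np.argsort(np.abs(sdf_grid).ravel())[:num_points]
        ys, xs = np.unravel_index(flat, (h, w))
        coords = np.stack([xs / (w - 1) * 2 - 1, ys / (h - 1) * 2 - 1], axis=-1)
        return coords, np.zeros((num_points, 1))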
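
The 'Pseudocode' row points to Algorithm 1 (gradient-based meta-learning of shape spaces) but does not reproduce it. Below is a schematic MAML-style inner/outer loop written against PyTorch's torch.func.functional_call; the L1 loss, function names, and data layout are illustrative and should not be read as the authors' released implementation:

    import torch
    import torch.nn.functional as F
    from torch.func import functional_call

    def inner_loop(model, init_params, alphas, ctx_coords, ctx_sdf, num_steps=5):
        # Adapt the meta-learned initialization to one shape from its context samples.
        params = dict(init_params)
        for _ in range(num_steps):
            pred = functional_call(model, params, (ctx_coords,))
            loss = F.l1_loss(pred, ctx_sdf)  # L1 regression loss is an assumption
            grads = torch.autograd.grad(loss, list(params.values()), create_graph=True)
            # Per-parameter learning rates (alphas) are meta-learned with the initialization.
            params = {name: p - alphas[name] * g
                      for (name, p), g in zip(params.items(), grads)}
        return params

    def meta_step(model, init_params, alphas, meta_opt, batch):
        # One outer update over a batch of shapes; gradients flow through the
        # inner-loop adaptation into both the initialization and the alphas.
        meta_opt.zero_grad()
        meta_loss = 0.0
        for ctx_coords, ctx_sdf, qry_coords, qry_sdf in batch:
            adapted = inner_loop(model, init_params, alphas, ctx_coords, ctx_sdf)
            meta_loss = meta_loss + F.l1_loss(
                functional_call(model, adapted, (qry_coords,)), qry_sdf)
        meta_loss.backward()
        meta_opt.step()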
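
The 'Experiment Setup' row lists the reported architecture and hyperparameters. One way to instantiate them, assuming PyTorch (the framework is not named in the paper), reading "four layers" as three hidden ReLU layers plus a linear output, and treating α as per-parameter and meta-learned (the paper reports only its initial value):

    import torch
    import torch.nn as nn

    # Fully connected ReLU-MLP with 256 hidden units and no normalization layers;
    # 2D coordinate in, scalar SDF value out.
    sdf_net = nn.Sequential(
        nn.Linear(2, 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, 1),
    )

    init_params = dict(sdf_net.named_parameters())

    # Inner-loop learning rates alpha initialized to 1e-1.
    alphas = {name: torch.full_like(p, 1e-1, requires_grad=True)
              for name, p in init_params.items()}

    num_inner_steps = 5
    meta_optimizer = torch.optim.Adam(
        list(init_params.values()) + list(alphas.values()), lr=1e-4)

With a batch of (context, query) point sets per shape, one meta-iteration would then call the inner/outer loop sketch, e.g. meta_step(sdf_net, init_params, alphas, meta_optimizer, batch).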