Implicit Geometric Regularization for Learning Shapes

Authors: Amos Gropp, Lior Yariv, Niv Haim, Matan Atzmon, Yaron Lipman

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In practice, we perform experiments with our method, building implicit neural representations from point clouds in 3D and learning collections of shapes directly from raw data. Our method produces state of the art surface approximations, showing significantly more detail and higher fidelity compared to alternative techniques. Our code is available at https://github.com/amosgropp/IGR."
Researcher Affiliation | Academia | Amos Gropp (1), Lior Yariv (1), Niv Haim (1), Matan Atzmon (1), Yaron Lipman (1); (1) Department of Computer Science & Applied Mathematics, Weizmann Institute of Science, Rehovot, Israel.
Pseudocode | No | The paper describes the method and its mathematical formulation but does not contain structured pseudocode or algorithm blocks (clearly labeled algorithm sections or code-like formatted procedures).
Open Source Code | Yes | "Our code is available at https://github.com/amosgropp/IGR."
Open Datasets | Yes | "We evaluated our method on the surface reconstruction benchmark (Berger et al., 2013), using data (input point clouds X, normal data N, and ground truth meshes for evaluation) from (Williams et al., 2019b)."
Dataset Splits | No | The paper states "random 75%-25% train-test split" and "8 out of 10 humans are used for training and the remaining 2 for testing" but does not specify a validation split with percentages or counts.
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor speeds, memory amounts, or other machine specifications) used for running its experiments.
Software Dependencies | No | The paper mentions using a multilayer perceptron (MLP) and automatic differentiation packages, but does not name specific software with version numbers (e.g., "PyTorch 1.9", "TensorFlow 2.x").
Experiment Setup | Yes | "For representing shapes we used level sets of an MLP f(x; θ), f : R^3 × R^m → R, with 8 layers, each containing 512 hidden units, and a single skip connection from the input to the middle layer as in (Park et al., 2019). The weights θ ∈ R^m are initialized using the geometric initialization from (Atzmon & Lipman, 2020). We set our loss parameters (see equation 2) to λ = 0.1, τ = 1."
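The setup row above fixes the architecture (8 layers, 512 units, one input skip), the geometric initialization, and the loss weights λ = 0.1, τ = 1. A minimal NumPy sketch of that configuration follows. It is an illustration under stated assumptions, not the authors' implementation: the softplus activation with β = 100, the sphere-of-radius-1 target for the geometric initialization, the sampling box for the eikonal term, and all function names are assumptions on my part; the released PyTorch code uses automatic differentiation where this sketch uses central differences.

```python
# Hedged sketch of the IGR setup in NumPy. Layer sizes, the skip
# connection, and the loss parameters follow the paper; everything
# else (activation, init details, sampling box) is assumed.
import numpy as np

rng = np.random.default_rng(0)

D_IN, HIDDEN, N_LAYERS = 3, 512, 8
SKIP = N_LAYERS // 2  # single skip from the input to the middle layer

def init_params(radius=1.0):
    """Geometric-style initialization (after Atzmon & Lipman, 2020),
    sketched: small Gaussian hidden weights; the last layer is set so
    that f(x) at init roughly behaves like a sphere SDF, |x| - radius."""
    params, in_dim = [], D_IN
    for i in range(N_LAYERS):
        out_dim = 1 if i == N_LAYERS - 1 else HIDDEN
        if i == SKIP:
            in_dim += D_IN  # skip: input is concatenated here
        if i == N_LAYERS - 1:
            W = np.full((in_dim, 1), np.sqrt(np.pi) / np.sqrt(in_dim))
            b = np.array([-radius])
        else:
            W = rng.normal(0.0, np.sqrt(2.0 / out_dim), (in_dim, out_dim))
            b = np.zeros(out_dim)
        params.append((W, b))
        in_dim = out_dim
    return params

def f(x, params):
    """MLP f(x; theta): 8 layers, 512 hidden units, one skip connection.
    Softplus with beta=100 (assumed) approximates ReLU smoothly."""
    h = x
    for i, (W, b) in enumerate(params):
        if i == SKIP:
            h = np.concatenate([h, x], axis=-1)
        h = h @ W + b
        if i < N_LAYERS - 1:
            h = np.logaddexp(0.0, 100.0 * h) / 100.0  # softplus(100*h)/100
    return h[..., 0]

def grad_f(x, params, eps=1e-4):
    """Central-difference stand-in for the autodiff gradient of f."""
    g = np.zeros_like(x)
    for d in range(x.shape[-1]):
        e = np.zeros(x.shape[-1]); e[d] = eps
        g[..., d] = (f(x + e, params) - f(x - e, params)) / (2.0 * eps)
    return g

def igr_loss(params, X, N, lam=0.1, tau=1.0):
    """Loss in the spirit of the paper's equation 2: a point term,
    a normal term weighted by tau, and the eikonal regularizer
    weighted by lam, evaluated at random off-surface samples."""
    data = np.abs(f(X, params)).mean()
    normals = np.linalg.norm(grad_f(X, params) - N, axis=-1).mean()
    Z = rng.uniform(-1.2, 1.2, X.shape)  # assumed sampling box
    eik = ((np.linalg.norm(grad_f(Z, params), axis=-1) - 1.0) ** 2).mean()
    return data + tau * normals + lam * eik
```

With the paper's values λ = 0.1 and τ = 1.0 as defaults, `igr_loss(params, X, N)` gives one scalar objective per batch of surface points `X` with unit normals `N`; training would minimize it over θ with a standard optimizer.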