NeuForm: Adaptive Overfitting for Neural Shape Editing

Authors: Connor Lin, Niloy Mitra, Gordon Wetzstein, Leonidas J. Guibas, Paul Guerrero

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We evaluate NeuForm on multiple applications: (i) reconstruction (i.e., projecting a given input to an adaptive overfitted latent space); (ii) part-based shape editing; and (iii) shape mixing (i.e., converting an arrangement of parts taken from different models into a coherent shape model). We compare with two state-of-the-art approaches [16, 39] and demonstrate advantages, both quantitatively and qualitatively."
Researcher Affiliation | Collaboration | Connor Z. Lin (Stanford University); Niloy J. Mitra (Adobe / UCL); Gordon Wetzstein (Stanford University); Leonidas J. Guibas (Stanford University); Paul Guerrero (Adobe)
Pseudocode | No | The paper describes methods using mathematical equations and textual explanations, but does not include any explicit pseudocode or algorithm blocks.
Open Source Code | No | "If accepted, we plan to release code, including instructions on how to reproduce the results, one or two months after notification."
Open Datasets | Yes | "Dataset. We use the PartNet [25] dataset for our experiments." [25] Kaichun Mo, Shilin Zhu, Angel X. Chang, Li Yi, Subarna Tripathi, Leonidas J. Guibas, and Hao Su. PartNet: A large-scale benchmark for fine-grained and hierarchical part-level 3D object understanding. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
Dataset Splits | No | The paper states "training/test split of 6000/1800, 2100/400, and 3500/500 for chairs, lamps, and tables, respectively," but does not describe a separate validation split.
Hardware Specification | Yes | "Training the generalizable model takes roughly 33 hours on a Titan Xp GPU and training the overfitted model takes roughly 25 minutes on a single V100 GPU."
Software Dependencies | No | The paper mentions using the Adam optimizer, but does not provide specific version numbers for software libraries or dependencies such as PyTorch, TensorFlow, or Python.
Experiment Setup | Yes | "Training details. We train the generalizable model for 1000 epochs using the Adam [17] optimizer with a learning rate of 1e-4 and an exponential learning rate decay of 0.994 per epoch. In each epoch, we train on 4096 query points per shape with a batch size of 1 shape. We sample 12.5% of the points uniformly in the [-1, 1] cube and 87.5% of the points around the surface with a Gaussian offset (N(0, 0.05)). The overfitted model is trained for 100 epochs on a single shape using the same training setup."
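
The training recipe quoted above maps directly to a standard PyTorch loop. The following is a minimal, hypothetical sketch of that setup (the 12.5%/87.5% query-point sampling, Adam at 1e-4 with 0.994 exponential decay, and a batch size of 1 shape). The decoder, the dummy single-shape dataset, and the `sdf_fn` ground-truth function are placeholder assumptions for illustration, not NeuForm's actual implementation.

```python
import torch

def sample_query_points(surface_points, n_points=4096, uniform_frac=0.125):
    """Sample query points as described in the paper: 12.5% uniform in the
    [-1, 1] cube and 87.5% near the surface with Gaussian offsets N(0, 0.05)."""
    n_uniform = int(n_points * uniform_frac)           # 512 uniform samples
    n_near = n_points - n_uniform                      # 3584 near-surface samples
    uniform = torch.rand(n_uniform, 3) * 2.0 - 1.0     # uniform in [-1, 1]^3
    idx = torch.randint(surface_points.shape[0], (n_near,))
    near = surface_points[idx] + 0.05 * torch.randn(n_near, 3)
    return torch.cat([uniform, near], dim=0)

# Placeholder decoder; the actual NeuForm architecture is not reproduced here.
decoder = torch.nn.Sequential(
    torch.nn.Linear(3, 256), torch.nn.ReLU(), torch.nn.Linear(256, 1)
)
optimizer = torch.optim.Adam(decoder.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.994)

# Dummy single-shape "dataset": random surface points plus a stand-in SDF
# (a sphere of radius 0.5), so the sketch runs end to end.
surface_points = torch.rand(10000, 3) * 2.0 - 1.0
def sdf_fn(q):
    return q.norm(dim=-1) - 0.5

dataset = [(surface_points, sdf_fn)]                   # batch size of 1 shape

for epoch in range(1000):  # the overfitted model uses 100 epochs instead
    for points, sdf in dataset:
        queries = sample_query_points(points)
        pred = decoder(queries).squeeze(-1)
        loss = torch.nn.functional.l1_loss(pred, sdf(queries))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    scheduler.step()  # exponential learning rate decay of 0.994 per epoch
```

The L1 loss and the dummy sphere SDF are stand-ins; only the sampling proportions, optimizer, learning rate, decay schedule, epoch counts, and batch size are taken from the paper's stated setup.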