Neural Implicit Shape Editing using Boundary Sensitivity
Authors: Arturs Berzins, Moritz Ibing, Leif Kobbelt
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate several examples of geometric editing in Figure 1 where we displace parts of both man-made and organic neural implicit shapes. In addition, we quantify and plot the relative geometric error between the computed shape and prescribed target, normalized by the largest target displacement: $(\delta \tilde{x}_n - \delta x_n)/\max_{x \in \Gamma} \lvert \delta x_n \rvert$. In Appendix A, we repeat the same set of experiments with the DualSDF (Hao et al., 2020) architecture. (A short sketch of this error metric follows the table.) |
| Researcher Affiliation | Collaboration | Arturs Berzins, Department of Mathematics and Cybernetics, SINTEF (arturs.berzins@sintef.no); Moritz Ibing & Leif Kobbelt, Visual Computing Institute, RWTH Aachen University |
| Pseudocode | No | The paper does not contain any sections or figures explicitly labeled as 'Pseudocode' or 'Algorithm'. |
| Open Source Code | No | The paper references third-party pretrained models used in the experiments (e.g., 'pretrained model available at https://github.com/czq142857/IM-NET-pytorch' and 'pretrained models on planes and chairs available at https://github.com/zekunhao1995/DualSDF'), but it does not provide access to its own implementation code. |
| Open Datasets | Yes | The decoder is trained as part of an auto-encoder reconstructing the entire ShapeNet dataset (Chang et al., 2015) from $\ell \in \mathbb{R}^{256}$ latent variables. In Appendix A, we repeat the same set of experiments with the DualSDF (Hao et al., 2020) architecture. Each model is trained on a single ShapeNet category. |
| Dataset Splits | No | The paper mentions training models on the ShapeNet dataset and uses pretrained models, but it does not explicitly describe the dataset splits (e.g., training, validation, test percentages or counts) used for its own experiments. |
| Hardware Specification | No | The paper describes the network architecture and number of parameters ('3 hidden layers of 32 neurons each with sin activations. In total, there are P = 2273 learnable parameters'), but it does not specify any hardware components (e.g., CPU, GPU models) used for running the experiments. |
| Software Dependencies | No | The paper mentions that the pretrained IM-Net model is 'implemented as an MLP' and references 'IM-NET-pytorch', implying the use of PyTorch, but it does not specify any software versions or other dependencies required for reproducibility. |
| Experiment Setup | Yes | In all cases, we include Tikhonov regularization with λ = 0.1 (see Appendix B). All networks share the same architecture: 3 hidden layers of 32 neurons each with sin activations. We sample roughly 100 points in these areas, prescribe the same target vector at each point, and leave the remaining boundary unconstrained. After projecting the target onto the current normal and finding the best fit parameter update according to Equation 5, we repeat this process for a few (< 15) iterations to achieve visually obvious changes. |
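
The experiment-setup row above outlines the editing procedure end to end: sample roughly 100 points on the region of interest, prescribe a target displacement, project it onto the current normal, solve a Tikhonov-regularized least-squares problem for the parameter update (Equation 5 in the paper), and repeat for a few iterations. The sketch below illustrates one such iteration in PyTorch. The decoder follows the stated architecture (3 hidden layers of 32 neurons with sin activations, which for a 3D input and scalar output indeed gives P = 2273 parameters); the class and function names and the exact form of the least-squares system are assumptions for illustration, not the authors' released code.

```python
import torch

# Decoder matching the described architecture: 3 hidden layers of 32 neurons
# with sin activations; for a 3D input and scalar output this gives
# 128 + 1056 + 1056 + 33 = 2273 learnable parameters, as stated in the paper.
# The class name is a placeholder, not taken from the paper.
class ImplicitMLP(torch.nn.Module):
    def __init__(self, width=32, hidden_layers=3):
        super().__init__()
        dims = [3] + [width] * hidden_layers + [1]
        self.layers = torch.nn.ModuleList(
            torch.nn.Linear(dims[i], dims[i + 1]) for i in range(len(dims) - 1)
        )

    def forward(self, x):
        for layer in self.layers[:-1]:
            x = torch.sin(layer(x))
        return self.layers[-1](x)


def boundary_sensitivity_step(model, pts, target_disp, lam=0.1):
    """One best-fit parameter update in the spirit of the paper's Equation 5
    (the exact formulation here is an assumption).

    pts:         (N, 3) samples on the current zero level set (~100 points)
    target_disp: (N, 3) prescribed displacement vectors (identical per region)
    lam:         Tikhonov regularization weight (0.1 in the paper)
    """
    params = [p for p in model.parameters() if p.requires_grad]

    pts = pts.detach().requires_grad_(True)
    f = model(pts).squeeze(-1)                                        # (N,)
    grad_x = torch.autograd.grad(f.sum(), pts, retain_graph=True)[0]  # (N, 3)
    grad_norm = grad_x.norm(dim=-1)
    normals = grad_x / grad_norm.unsqueeze(-1)

    # Project the prescribed target displacement onto the current normal.
    dxn = (target_disp * normals).sum(-1)                             # (N,)

    # Sensitivity of f w.r.t. the parameters at every sample point (N x P).
    rows = []
    for i in range(f.shape[0]):
        grads = torch.autograd.grad(f[i], params, retain_graph=True)
        rows.append(torch.cat([g.reshape(-1) for g in grads]))
    J = torch.stack(rows)

    # Moving the level set by dxn along the normal requires df = -|grad f| dxn.
    rhs = -grad_norm.detach() * dxn

    # Tikhonov-regularized least squares: (J^T J + lam I) dtheta = J^T rhs.
    A = J.T @ J + lam * torch.eye(J.shape[1], device=J.device)
    dtheta = torch.linalg.solve(A, J.T @ rhs)

    # Apply the parameter update in place.
    with torch.no_grad():
        offset = 0
        for p in params:
            n = p.numel()
            p.add_(dtheta[offset:offset + n].view_as(p))
            offset += n
```

Repeating this step for a handful of iterations (fewer than 15 in the paper) accumulates the small, locally linear updates into a visually obvious edit.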
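
The 'Research Type' row quotes the paper's quantitative check: the achieved normal displacement is compared against the prescribed one and normalized by the largest prescribed displacement. A minimal sketch of that metric, assuming the achieved and prescribed per-point normal displacements are available as tensors (variable names are illustrative):

```python
import torch

def relative_geometric_error(dxn_achieved, dxn_prescribed):
    # Per-point relative error (delta x~_n - delta x_n) / max |delta x_n|,
    # as described in the quoted passage; names are assumptions.
    return (dxn_achieved - dxn_prescribed) / dxn_prescribed.abs().max()
```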