Geometry-Consistent Neural Shape Representation with Implicit Displacement Fields
Authors: Wang Yifan, Lukas Rahmann, Olga Sorkine-Hornung
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Systematic evaluations show that our approach is significantly more powerful in representing geometric details, while being lightweight and highly stable in training. We compare our approach with 5 baseline methods in a quantitative comparison. Among the benchmarked methods, only NGLOD at LOD-6, which uses 256× the number of parameters of our model, yields results close to ours. SIREN models with larger ω have convergence issues: despite our best efforts, the models still diverged in most cases. Please refer to Table 4 for a more comprehensive evaluation. |
| Researcher Affiliation | Academia | Wang Yifan, ETH Zurich, ywang@inf.ethz.ch; Lukas Rahmann, ETH Zurich, lukas.rahmanna@gmail.com; Olga Sorkine-Hornung, ETH Zurich, sorkine@inf.ethz.ch |
| Pseudocode | No | The paper describes its methods in prose and with diagrams, but it does not include any formal pseudocode blocks or algorithm listings. |
| Open Source Code | Yes | Code and data available at: https://github.com/yifita/idf |
| Open Datasets | Yes | We test our method using 16 high-resolution shapes, including 14 from Sketchfab (ske, 2021) and 2 from the Stanford 3D Scan Repository (sta, 2021). Our transferable displacement model is tested using shapes provided by Berkiten et al. (2017), Yang et al. (2020), and Zhou & Jacobson (2016). |
| Dataset Splits | No | The paper mentions a "training percentile T_m ∈ [0, 1]" as part of a progressive learning scheme (see the gating sketch after this table) and references "validation" as a section title for evaluating transferability, but it does not specify a distinct validation dataset split with percentages or counts for its main experiments. |
| Hardware Specification | Yes | All benchmarking is performed on a single Nvidia RTX 2080 GPU. |
| Software Dependencies | No | The paper mentions software components like "ADAM optimizer" and "cosine annealing (Loshchilov & Hutter, 2016)" but does not provide specific version numbers for these or other software dependencies. |
| Experiment Setup | Yes | The maximal displacement α, the attenuation factor ν, and the switching training percentile T_m are set to 0.05, 0.02, and 0.2, respectively; the loss weights λ_{0,1,2,3} in equation 4 are set to 5, 400, 40, and 50. Models are trained for 120 epochs using the ADAM optimizer with an initial learning rate of 0.0001, decayed to 0.00001 via cosine annealing (Loshchilov & Hutter, 2016) after 80% of the training epochs. Each training iteration uses 4096 subsampled surface points and 4096 off-surface points uniformly sampled from the [-1, 1]^3 bounding box (see the training-setup sketch below). |
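
The reported experiment setup maps onto a short PyTorch sketch. Only the stated hyperparameters come from the paper: 120 epochs, learning rate 1e-4 decayed to 1e-5 by cosine annealing over the last 20% of epochs, 4096 surface plus 4096 off-surface points per iteration, and loss weights 5/400/40/50. The tiny MLP, the random placeholder point cloud, and the simplified loss terms are hypothetical stand-ins, not the paper's actual network or equation 4.

```python
import torch

EPOCHS = 120
ANNEAL_START = int(0.8 * EPOCHS)      # cosine annealing begins after 80% of epochs
LAMBDAS = (5.0, 400.0, 40.0, 50.0)    # loss weights lambda_0..lambda_3 (equation 4)
N_SURF = N_OFF = 4096                 # points per training iteration

# Hypothetical stand-in for the SDF network (the paper uses SIREN-based nets).
model = torch.nn.Sequential(
    torch.nn.Linear(3, 256), torch.nn.Softplus(beta=100),
    torch.nn.Linear(256, 1))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=EPOCHS - ANNEAL_START, eta_min=1e-5)

surface_pts_all = torch.rand(100_000, 3) * 2 - 1  # placeholder "scan" point cloud

for epoch in range(EPOCHS):
    # Subsample 4096 surface points; draw 4096 uniform off-surface points.
    idx = torch.randint(len(surface_pts_all), (N_SURF,))
    surf = surface_pts_all[idx]
    off = torch.rand(N_OFF, 3) * 2 - 1            # uniform in [-1, 1]^3

    sdf_surf, sdf_off = model(surf), model(off)
    # Placeholder loss terms; only the weighting scheme follows the paper.
    terms = (sdf_surf.abs().mean(),               # on-surface SDF should be ~0
             torch.zeros(()),                     # normal/gradient terms omitted here
             torch.zeros(()),
             torch.exp(-100 * sdf_off.abs()).mean())  # keep off-surface SDF away from 0
    loss = sum(w * t for w, t in zip(LAMBDAS, terms))

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if epoch >= ANNEAL_START:
        scheduler.step()                          # decay lr from 1e-4 toward 1e-5
```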
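
The training percentile T_m mentioned under Dataset Splits controls the paper's progressive learning scheme, in which the displacement network is switched on partway through training. The paper gives the switch value (T_m = 0.2); the sketch below is only an assumed illustration of such a fraction-of-training switch, and the function name and interface are hypothetical.

```python
def displacement_enabled(step: int, total_steps: int, t_m: float = 0.2) -> bool:
    """Hypothetical gate: activate the displacement network once the
    completed fraction of training (the "training percentile") reaches T_m."""
    return step / total_steps >= t_m

# Example: with T_m = 0.2 and 120 epochs, the switch happens at epoch 24.
assert not displacement_enabled(23, 120)
assert displacement_enabled(24, 120)
```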