NeuroGF: A Neural Representation for Fast Geodesic Distance and Path Queries

Authors: Qijian Zhang, Junhui Hou, Yohanes Adikusuma, Wenping Wang, Ying He

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Evaluations on common 3D object models and real-captured scene-level meshes demonstrate our exceptional performance in terms of representation accuracy and querying efficiency. We performed quantitative evaluations on geodesic distances produced by different approaches, as compared in Table 1, where our NeuroGF achieves the lowest mean relative errors for most testing shapes (a hedged sketch of this metric appears after the table). We also visualized geodesic distance fields using isolines in Figure 4, which shows the smoothness of our results. We further compared the time efficiency of different approaches (including exact VTP [32] for ground-truth geodesic distances and paths). We made further efforts to extend the overfitting working mode of NeuroGFs introduced previously to generalizable learning frameworks. Here, we further investigated the effects of scaling the network size by changing D_e to 64, 128, and 512, leading to three variants with 35K, 84K, and 916K parameters, respectively. Furthermore, to reveal the specific influences of the different learning components and supervision objectives involved in our approach, we performed the necessary ablative analyses, as presented in Table 4. We also experimented with a much denser version of the dragon model with 1.5M vertices and another classic graphics model, Lucy, with 6.9M vertices.
Researcher Affiliation | Academia | Qijian Zhang (1), Junhui Hou (1), Yohanes Yudhi Adikusuma (2), Wenping Wang (3), Ying He (2). (1) Department of Computer Science, City University of Hong Kong, Hong Kong SAR, China; (2) School of Computer Science and Engineering, Nanyang Technological University, Singapore; (3) Department of Computer Science and Engineering, Texas A&M University, Texas, USA
Pseudocode | No | The paper describes the architectural design and learning objectives in detail but does not contain a formal pseudocode block or algorithm structure.
Open Source Code | Yes | Our code and data are available at https://github.com/keeganhk/NeuroGF.
Open Datasets | Yes | We used the popular ShapeNet [7] mesh dataset pre-processed by [42], covering 13 different shape categories. We collected 3000 models from 8 categories as our training set. For each model, we only sparsely generated 2K ground-truth training pairs (a hedged sampling sketch appears after the table).
Dataset Splits | No | The paper describes training and testing sets, but it does not explicitly mention a validation split with specific percentages, counts, or a standard citation for validation data partitioning. It focuses on training and then evaluating on various testing sets (category-specific, seen categories, unseen categories).
Hardware Specification | Yes | We further compared the time efficiency of different approaches (including exact VTP [32] for ground-truth geodesic distances and paths), where [32, 10, 2] run on the CPU (Intel i5 7500), while NeuroGF runs on the GPU (NVIDIA GeForce RTX 3090).
Software Dependencies | No | The paper mentions using the AdamW [20] optimizer and that the code is written in Python, but it does not specify version numbers for Python, any deep learning framework (such as PyTorch or TensorFlow), or other relevant libraries.
Experiment Setup | Yes | We adopted the popular AdamW [20] optimizer for parameter updating over 500 training epochs, with the learning rate gradually decaying from 0.01 to 0.0001 as scheduled by cosine annealing (a hedged training-loop sketch appears after the table). During each training epoch, we randomly sampled around 30K spatial points as signed distance queries, 90K paired mesh vertices as geodesic distance queries, and 20K paired mesh vertices as shortest-path queries, which are repeatedly consumed as inputs for 200 iterations. Throughout the training phase, we specified the number of geodesic curve points as M = 128 and M = 32 for long and short shortest paths, respectively, to facilitate batch-wise processing. We adopted D_e = 256 for constructing our baseline representation model, which contains 259K network parameters in total.
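
Note on the accuracy metric: under the standard definition, the mean relative error reported in Table 1 is the average of |d_pred - d_gt| / d_gt over the evaluated vertex pairs. Below is a minimal sketch assuming that definition and NumPy; the paper's exact evaluation code (e.g., its handling of zero-distance pairs) is not quoted in this report.

    import numpy as np

    def mean_relative_error(d_pred: np.ndarray, d_gt: np.ndarray, eps: float = 1e-8) -> float:
        # Standard-definition mean relative error between predicted and
        # ground-truth geodesic distances; eps guards against division by
        # zero for degenerate (source == target) pairs.
        return float(np.mean(np.abs(d_pred - d_gt) / (d_gt + eps)))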
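
Note on training-pair generation: the 2K ground-truth pairs per model could be drawn by uniform random sampling of vertex indices. The helper below is hypothetical (the paper's actual sampling strategy is not quoted here) and assumes NumPy.

    import numpy as np

    def sample_query_pairs(num_vertices: int, num_pairs: int = 2000, seed: int = 0) -> np.ndarray:
        # Hypothetical helper: sparsely sample (source, target) vertex-index
        # pairs for which ground-truth geodesic distances are pre-computed.
        rng = np.random.default_rng(seed)
        return rng.integers(0, num_vertices, size=(num_pairs, 2))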
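
Note on the optimization schedule: a minimal training-loop sketch, assuming PyTorch (the paper does not name its deep learning framework): AdamW with the learning rate cosine-annealed from 0.01 to 0.0001 over 500 epochs, and each epoch's sampled queries reused for 200 iterations. The network and tensors below are placeholders, not the actual NeuroGF architecture.

    import torch

    # Placeholder network; the real NeuroGF model (D_e = 256, 259K parameters)
    # is described in the paper and released at the repository above.
    model = torch.nn.Sequential(
        torch.nn.Linear(6, 256), torch.nn.ReLU(), torch.nn.Linear(256, 1)
    )

    optimizer = torch.optim.AdamW(model.parameters(), lr=0.01)
    # Cosine annealing from the initial 0.01 down to 0.0001 across 500 epochs.
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
        optimizer, T_max=500, eta_min=0.0001
    )

    for epoch in range(500):
        for _ in range(200):  # sampled queries are reused for 200 iterations
            queries = torch.randn(1024, 6)  # placeholder paired-point queries
            targets = torch.randn(1024, 1)  # placeholder ground-truth distances
            loss = torch.nn.functional.mse_loss(model(queries), targets)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        scheduler.step()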