NeuralGF: Unsupervised Point Normal Estimation by Learning Neural Gradient Function
Authors: Qing Li, Huifang Feng, Kanle Shi, Yue Gao, Yi Fang, Yu-Shen Liu, Zhizhong Han
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our excellent results on widely used benchmarks demonstrate that our method can learn more accurate normals for both unoriented and oriented normal estimation tasks than the latest methods. |
| Researcher Affiliation | Collaboration | Qing Li¹, Huifang Feng², Kanle Shi³, Yue Gao¹, Yi Fang⁴, Yu-Shen Liu¹, Zhizhong Han⁵. ¹School of Software, Tsinghua University, Beijing, China; ²School of Informatics, Xiamen University, Xiamen, China; ³Kuaishou Technology, Beijing, China; ⁴Center for Artificial Intelligence and Robotics, New York University Abu Dhabi, Abu Dhabi, UAE; ⁵Department of Computer Science, Wayne State University, Detroit, USA |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The source code and pre-trained model are available at https://github.com/LeoQLi/NeuralGF. |
| Open Datasets | Yes | We report quantitative evaluation results on the PCPNet and FamousShape [39] datasets. |
| Dataset Splits | No | The paper mentions selecting NQ and NG points for input, but does not provide specific training, validation, or test dataset splits (e.g., percentages or sample counts) for reproducibility. |
| Hardware Specification | No | The paper does not specify any hardware details (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions using "PyTorch [58]" but does not specify a version number for it or any other software dependency. |
| Experiment Setup | Yes | It is composed of eight linear layers and a skip connection from the input to the intermediate layer. Except for the last layer, each linear layer contains 512 hidden units and is equipped with a ReLU activation function. The last layer outputs the signed distance for each point. The parameters of the linear layers are initialized using the geometric initialization [6]. We set $l$ to 25 for the PCPNet dataset and 50 for the FamousShape dataset. We choose the neighborhood scale set $\{K_s\}_{s=1}^{N_K}$ as $\{1, K/2, K\}$ with $K = 8$, and set the parameter $\rho$ of the adaptive weight to 60. We select $N_Q = 5000$ points from the distribution $D$ and $N_G = 2500$ points from $P$ to form the input $\{Q, G\}$. The number of steps is set to $I = 2$ for all evaluations. The balance weights $\lambda_1 = 0.01$, $\lambda_2 = 0.1$ and $\lambda_3 = 10$ are fixed across all experiments. |
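
The experiment-setup excerpt above describes a coordinate MLP whose gradient serves as the normal field. The PyTorch sketch below illustrates that configuration under stated assumptions: the skip-layer position, the geometric-initialization constants, and the `estimate_normals` helper are illustrative choices, not taken from the authors' released code.

```python
import numpy as np
import torch
import torch.nn as nn


class SDFNet(nn.Module):
    """Minimal sketch of the MLP quoted above: eight linear layers,
    512 hidden units, ReLU activations, a skip connection from the
    input to an intermediate layer, and a single signed-distance output.
    Skip position and initialization constants are assumptions."""

    def __init__(self, in_dim=3, hidden=512, num_layers=8, skip_layer=4, radius=1.0):
        super().__init__()
        self.skip_layer = skip_layer
        dims = [in_dim] + [hidden] * (num_layers - 1) + [1]
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            # the layer receiving the skip connection also takes the raw input
            d_in = dims[i] + (in_dim if i == skip_layer else 0)
            lin = nn.Linear(d_in, dims[i + 1])
            if i == num_layers - 1:
                # geometric initialization: the initial SDF approximates a sphere
                nn.init.normal_(lin.weight, mean=np.sqrt(np.pi) / np.sqrt(d_in), std=1e-4)
                nn.init.constant_(lin.bias, -radius)
            else:
                nn.init.normal_(lin.weight, 0.0, np.sqrt(2.0) / np.sqrt(dims[i + 1]))
                nn.init.constant_(lin.bias, 0.0)
            self.layers.append(lin)
        self.act = nn.ReLU()

    def forward(self, x):
        h = x
        for i, lin in enumerate(self.layers):
            if i == self.skip_layer:
                h = torch.cat([h, x], dim=-1)
            h = lin(h)
            if i < len(self.layers) - 1:
                h = self.act(h)
        return h  # signed distance per point


def estimate_normals(net, pts):
    """Normals as the normalized gradient of the predicted signed distance."""
    pts = pts.clone().requires_grad_(True)
    sdf = net(pts)
    grad = torch.autograd.grad(sdf.sum(), pts, create_graph=True)[0]
    return torch.nn.functional.normalize(grad, dim=-1)
```

Taking the normal as the normalized gradient of the signed distance matches the paper's framing of normal estimation via a learned neural gradient function; loss terms and the $\lambda$ weights quoted above are omitted here.
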