NeAF: Learning Neural Angle Fields for Point Normal Estimation

Authors: Shujuan Li, Junsheng Zhou, Baorui Ma, Yu-Shen Liu, Zhizhong Han

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "The experimental results with synthetic data and real scans show significant improvements over the state-of-the-art under widely used benchmarks. Project page: https://lisj575.github.io/NeAF/." and "For the experiments on synthetic shapes, we adopt the PCPNet dataset provided by Guerrero et al. (2018)."
Researcher Affiliation | Academia | (1) School of Software, BNRist, Tsinghua University, Beijing, China; (2) Department of Computer Science, Wayne State University, Detroit, USA
Pseudocode | No | The paper describes algorithmic steps in prose but does not include formal pseudocode or algorithm blocks.
Open Source Code | Yes | Project page: https://lisj575.github.io/NeAF/
Open Datasets | Yes | "For the experiments on synthetic shapes, we adopt the PCPNet dataset provided by Guerrero et al. (2018)." and "The SceneNN (Hua et al. 2016) dataset provides indoor scenes in the form of reconstructed meshes."
Dataset Splits | Yes | "We use the same train/test settings and data augmentation strategies. PCPNet samples 100k points on the mesh of each shape to obtain a point cloud. The training set contains 8 shapes, and each shape includes a noise-free point cloud and three point clouds containing Gaussian noise with a standard deviation of 0.125% (Low), 0.65% (Med) and 1.2% (High) of the length of the bounding box diagonal of the shape. In addition to the three noise variants, two additional point clouds with varying densities (Stripes and Gradients) are added to the test set." and "We randomly select 40% of all points to calculate RMSE."
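The PCPNet noise settings quoted above (Gaussian noise whose standard deviation is a fixed fraction of the bounding-box diagonal) can be sketched in a few lines. This is an illustrative reconstruction, not code from the paper; `add_gaussian_noise` is a hypothetical helper name.

```python
import numpy as np

def add_gaussian_noise(points, level, rng=None):
    # Hypothetical helper mirroring the PCPNet noise variants: the noise
    # standard deviation is `level` times the bounding-box diagonal length.
    rng = np.random.default_rng(0) if rng is None else rng
    diag = np.linalg.norm(points.max(axis=0) - points.min(axis=0))
    return points + rng.normal(scale=level * diag, size=points.shape)

# 100k points per shape, as in PCPNet; the three levels are the quoted
# Low (0.125%), Med (0.65%) and High (1.2%) settings.
pts = np.random.default_rng(42).random((100_000, 3))
variants = {lvl: add_gaussian_noise(pts, lvl) for lvl in (0.00125, 0.0065, 0.012)}
```

Scaling the noise by the bounding-box diagonal makes the corruption level comparable across shapes of different sizes, which is why PCPNet reports noise as a percentage rather than an absolute value.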
Hardware Specification | Yes | "The model is trained on 2 GTX 1080Ti."
Software Dependencies | No | The paper mentions the Adam optimizer and a PointNet architecture, but does not provide version numbers for software dependencies such as Python, PyTorch, or TensorFlow.
Experiment Setup | Yes | "For generating training data, we adopt the method proposed by Muller (1959) to randomly and uniformly sample M = 5000 query vectors in the unit spherical space for training... As for inference data, we use the same method to sample m = 10000 query vectors for extracting coarse normals. During training, we randomly select 400 query vectors from the training set as a batch to train the network. At inference time, we select to extract l = 10 coarse normals and optimize them simultaneously in 5 epochs for coarse normal refinement. We use the Adam optimizer with an initial learning rate of 1e-3, and adopt a cosine learning rate decay strategy with warmup. The model is trained on 2 GTX 1080Ti. In coarse normal refinement, we use the Adam optimizer with an initial learning rate of 0.005."
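The quoted setup combines two standard ingredients that are easy to sketch: Muller's (1959) trick of normalizing i.i.d. Gaussian samples to draw uniformly distributed unit vectors (the query-vector sampling), and a cosine learning-rate decay with linear warmup. The sketch below is illustrative; the function names and the warmup length are assumptions, not details from the paper.

```python
import math
import numpy as np

def sample_unit_vectors(m, rng=None):
    # Muller (1959): i.i.d. Gaussian samples, normalized to unit length,
    # are uniformly distributed on the unit sphere.
    rng = np.random.default_rng(0) if rng is None else rng
    v = rng.normal(size=(m, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def cosine_lr_with_warmup(step, total_steps, base_lr=1e-3, warmup_steps=500):
    # Linear warmup to base_lr, then cosine decay to zero.
    # warmup_steps=500 is an illustrative choice, not stated in the paper.
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    t = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * t))

queries = sample_unit_vectors(10_000)  # m = 10000 inference query vectors
```

Normalizing Gaussian draws avoids the bias that naive schemes (e.g., normalizing points sampled in a cube) introduce toward the cube's corners, which is why this sampler is the standard choice for directions on the sphere.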