Local Surface Descriptor for Geometry and Feature Preserved Mesh Denoising

Authors: Wenbo Zhao, Xianming Liu, Junjun Jiang, Debin Zhao, Ge Li, Xiangyang Ji

AAAI 2022, pp. 3446-3453

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The paper states: 'The extensive experimental results show that, compared to the state-of-the-arts, our method achieves encouraging performance with respect to both objective and subjective evaluations.' It further states: 'We provide extensive experimental comparison with the state-of-the-art methods on multiple datasets to show that our scheme achieves the best mesh denoising performance so far with respect to average objective metric and subjective metric, which demonstrates the power of LSD.'
Researcher Affiliation | Academia | Wenbo Zhao (1,2), Xianming Liu (1,2)*, Junjun Jiang (1,2), Debin Zhao (1,2), Ge Li (3), Xiangyang Ji (4). 1: Faculty of Computing, Harbin Institute of Technology, Harbin, China; 2: Peng Cheng Laboratory, Shenzhen, China; 3: School of Electronic and Computer Engineering, Peking University Shenzhen Graduate School, Shenzhen, China; 4: Department of Automation and BNRist, Tsinghua University, Beijing, China.
Pseudocode | No | The paper describes the steps of LSD generation and the overall pipeline of LSD-net using textual descriptions and figures, but it does not include any formal pseudocode blocks or algorithm listings.
Open Source Code | No | The paper states: 'The source codes and pre-trained models of CNR, FGC, GNF and NLF are kindly released by their authors or implemented by a third party.' This refers to the code of the comparison methods, not the authors' own method (LSD-net). No statement of a release of their own code is found.
Open Datasets | Yes | 'We separately train three series of RESNET(k) corresponding to the training sets released by (Wang, Liu, and Tong 2016), including the Synthetic set (60 meshes), Kinect V1 set (72 meshes) and Kinect V2 set (72 meshes).'
Dataset Splits | No | The paper mentions 'training sets' and 'test meshes/datasets' but does not explicitly describe how data are divided into training, validation, and test subsets, nor give percentages or counts for such splits within a single dataset. It appears to reuse the predefined training and testing sets from prior work rather than performing new splits.
Hardware Specification | Yes | 'All the experiments are conducted on a server with two Tesla V100 GPUs.'
Software Dependencies | No | The paper mentions using 'the classical Resnet' and the 'Adam optimizer' but does not specify version numbers for any software libraries (e.g., PyTorch, TensorFlow) or programming languages (e.g., Python) used for implementation.
Experiment Setup | Yes | Network Training: '... Adam optimizer (β1 = 0.9, β2 = 0.999, learning rate = 0.0001, batch size = 80).' Parameters Setting: 'For the parameters of generating LSD, we set ts = 40 and αc = 8. The direction of nt can be arbitrarily chosen, so we set nt = (1, 0, 0). The parameters of iteration are individually set for different datasets to achieve better results, which are shown in Table 1.'

Table 1 (iteration parameters per dataset):

Dataset | Kinect V1 | Kinect V2 | Synthetic | Scanned
Nf      | 4         | 3         | 2         | 2
Nv      | 20        | 20        | 20        | 20
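For illustration, below is a minimal sketch of the training configuration the paper reports (Adam with β1 = 0.9, β2 = 0.999, learning rate = 0.0001, batch size = 80). The framework is not named in the paper, so PyTorch is an assumption; torchvision's resnet18 stands in for the paper's 'classical Resnet', and random tensors stand in for the unreleased LSD training data, whose true shape is not specified. The 3-D normal output head and the MSE loss are likewise assumptions, not the authors' confirmed setup.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models import resnet18

# Stand-in backbone; the paper only says it uses "the classical Resnet".
model = resnet18(num_classes=3)  # 3-D output head (assumption: normal regression)

optimizer = torch.optim.Adam(
    model.parameters(),
    lr=1e-4,             # learning rate = 0.0001, as reported
    betas=(0.9, 0.999),  # β1 = 0.9, β2 = 0.999, as reported
)

# Dummy stand-ins for LSD patches and ground-truth normals; the real
# descriptor shape is not specified (40 here merely echoes ts = 40).
inputs = torch.randn(160, 3, 40, 40)
targets = torch.randn(160, 3)
loader = DataLoader(TensorDataset(inputs, targets), batch_size=80, shuffle=True)  # batch size = 80, as reported

model.train()
for lsd_patch, gt_normal in loader:
    optimizer.zero_grad()
    pred = model(lsd_patch)
    loss = torch.nn.functional.mse_loss(pred, gt_normal)  # loss choice: assumption
    loss.backward()
    optimizer.step()
```

This sketch only demonstrates that the reported hyperparameters map onto a standard optimizer configuration; reproducing the paper's results would additionally require the LSD generation procedure and the per-dataset iteration parameters from Table 1.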