Improving Robustness of 3D Point Cloud Recognition from a Fourier Perspective

Authors: Yibo Miao, Yinpeng Dong, Jinlai Zhang, Lijia Yu, Xiao Yang, Xiao-Shan Gao

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirically, we conducted extensive experiments with various network architectures to validate the effectiveness of FAT, which achieves new state-of-the-art results.
Researcher Affiliation | Collaboration | Yibo Miao (1,2), Yinpeng Dong (3,6), Jinlai Zhang (4), Lijia Yu (5), Xiao Yang (3), Xiao-Shan Gao (1,2); 1: KLMM, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China; 2: University of Chinese Academy of Sciences, Beijing 100049, China; 3: Tsinghua University, Beijing 100084, China; 4: Changsha University of Science and Technology, Changsha 410114, China; 5: Institute of Software, Chinese Academy of Sciences, Beijing 100190, China; 6: RealAI
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | We have provided our codes in the supplemental material.
Open Datasets | Yes | To validate the effectiveness of our FAT method in enhancing the corruption robustness of 3D point cloud recognition models, we train all models on the standard ModelNet40 training set [67]. In addition to reporting the performance of the models on the original ModelNet40 validation set, we also evaluate the corruption robustness on ModelNet-C [41] in the main paper and ModelNet40-C [51] in Appendix B.
Dataset Splits | Yes | In addition to reporting the performance of the models on the original ModelNet40 validation set, we also evaluate the corruption robustness on ModelNet-C [41] in the main paper and ModelNet40-C [51] in Appendix B. The ModelNet40 dataset [67] contains 12,311 CAD models covering 40 common object categories. We use the official split [35], where 9,843 examples are used for training and the remaining 2,468 examples are used for testing. (A minimal loading sketch for this split appears below the table.)
Hardware Specification | Yes | All of the experiments are conducted on NVIDIA Tesla V100 GPUs.
Software Dependencies | No | The paper mentions software components such as the "Adam optimizer" and "smooth cross-entropy loss" but does not specify version numbers or other concrete software dependencies, such as Python or PyTorch versions.
Experiment Setup | Yes | For each method, we train 250 epochs using the smooth cross-entropy loss [65] and Adam optimizer [23], and select the best-performing model for further evaluation. We follow the DGCNN protocol [16]. For our method, we set k = 30 for the k-nearest neighbor graph and λ = 100 for dividing high-frequency and low-frequency components [29]. We use PGD [33] and AOF [27] to generate high-frequency adversarial examples and low-frequency adversarial examples, respectively. We constrain S_h and S_l by 0.3 and 0.5, respectively. For more detailed training settings, please refer to Appendix B. (A hedged sketch of this frequency split appears below the table.)
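
For readers reproducing the dataset split, the following is a minimal loading sketch, assuming the ModelNet40 HDF5 release (2,048 points per shape) used by PointNet/DGCNN-style codebases; the directory and file-list names below are illustrative, not taken from the paper.

```python
# Minimal sketch: loading the official ModelNet40 split.
# Assumes the HDF5 release used by PointNet/DGCNN-style codebases; the
# file-list names (train_files.txt / test_files.txt) are illustrative.
import h5py
import numpy as np

def load_split(file_list_path):
    """Concatenate all HDF5 shards listed in a split file."""
    points, labels = [], []
    with open(file_list_path) as f:
        for h5_path in (line.strip() for line in f if line.strip()):
            with h5py.File(h5_path, "r") as h5:
                points.append(h5["data"][:])   # (N, 2048, 3) xyz coordinates
                labels.append(h5["label"][:])  # (N, 1) class indices in [0, 40)
    return np.concatenate(points), np.concatenate(labels).squeeze()

train_x, train_y = load_split("modelnet40_ply_hdf5_2048/train_files.txt")
test_x, test_y = load_split("modelnet40_ply_hdf5_2048/test_files.txt")
# Official split per the paper: 9,843 train / 2,468 test examples
# (some HDF5 re-releases differ from these counts by a few shapes).
print(train_x.shape, test_x.shape)
```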
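
The k = 30 / λ = 100 frequency split in the setup row can be made concrete with a graph-spectral sketch. The version below builds a k-NN graph over the point cloud, takes the graph Fourier transform through the Laplacian eigenbasis, and keeps the first λ spectral components as the low-frequency band. Interpreting λ as an eigenvector count (rather than an eigenvalue cutoff) is our assumption, following AOF-style spectrum splits; the paper's exact convention may differ.

```python
# Hedged sketch of a graph-spectral frequency split for a point cloud,
# with k = 30 and lam = 100 matching the quoted hyperparameters.
import numpy as np
from scipy.spatial import cKDTree

def frequency_split(points, k=30, lam=100):
    """Split an (N, 3) point cloud into low- and high-frequency parts."""
    n = len(points)
    # Symmetric k-NN adjacency (first query result is the point itself).
    _, idx = cKDTree(points).query(points, k=k + 1)
    adj = np.zeros((n, n))
    rows = np.repeat(np.arange(n), k)
    adj[rows, idx[:, 1:].ravel()] = 1.0
    adj = np.maximum(adj, adj.T)
    # Unnormalized graph Laplacian and its eigenbasis (graph Fourier basis).
    lap = np.diag(adj.sum(1)) - adj
    _, basis = np.linalg.eigh(lap)   # columns sorted by ascending frequency
    spectrum = basis.T @ points      # GFT of the xyz coordinates
    low = basis[:, :lam] @ spectrum[:lam]  # smooth global shape
    high = points - low                    # fine detail / noise band
    return low, high

low, high = frequency_split(np.random.rand(1024, 3))
```

Under this reading of the setup, PGD perturbations would be confined to the high band and AOF perturbations to the low band, with the quoted budgets of 0.3 and 0.5 constraining S_h and S_l, respectively.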