Adversarially Robust 3D Point Cloud Recognition Using Self-Supervisions

Authors: Jiachen Sun, Yulong Cao, Christopher B. Choy, Zhiding Yu, Anima Anandkumar, Zhuoqing Morley Mao, Chaowei Xiao

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Through extensive experimentation, we demonstrate that appropriate applications of self-supervision can significantly enhance the robustness in 3D point cloud recognition, achieving considerable improvements compared to the standard adversarial training baseline.
Researcher Affiliation | Collaboration | Jiachen Sun (1), Yulong Cao (1), Christopher Choy (2), Zhiding Yu (2), Anima Anandkumar (2,3), Z. Morley Mao (1), and Chaowei Xiao (2,4); (1) University of Michigan, (2) NVIDIA, (3) Caltech, (4) ASU
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide an explicit statement about open-source code availability or a link to a code repository for the methodology described.
Open Datasets | Yes | We leverage four datasets (D): ModelNet40 [30] (40 classes), ModelNet10 [30] (10 classes), ScanObjectNN [41] (15 classes), and ShapeNetPart [42] throughout our experiments.
Dataset Splits | No | We follow the default split of training and test sets in [5] and [43].
Hardware Specification | Yes | All experiments are done on 1 to 4 NVIDIA V100 GPUs [45].
Software Dependencies | No | The paper mentions using Adam [44] for optimization but does not list other specific software dependencies with version numbers (e.g., PyTorch, CUDA versions).
Experiment Setup | Yes | We use batch sizes of 32 for PointNet and DGCNN, and 128 for PCT. The initial learning rate is set to 0.001 for PointNet and DGCNN, and 5×10⁻⁴ for PCT. Both pre-training and fine-tuning take 250 epochs, where a 10× decay happens at the 100th, 150th, and 200th epochs.
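The reported schedule (initial LR 0.001 for PointNet/DGCNN, decayed by a factor of 10 at epochs 100, 150, and 200 over 250 epochs) can be sketched as a small step-decay function. This is a minimal illustration, not the authors' code; the function name `lr_at_epoch` and the assumption that milestones are 0-indexed epoch thresholds are ours.

```python
def lr_at_epoch(epoch, initial_lr=1e-3, milestones=(100, 150, 200), gamma=0.1):
    """Step-decay schedule: multiply the learning rate by `gamma`
    at every milestone the given epoch has passed.

    Assumption: milestones compare against a 0-indexed epoch counter.
    """
    lr = initial_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr

# PointNet / DGCNN: start at 1e-3, end near 1e-6 after the third decay.
for e in (0, 120, 175, 249):
    print(e, lr_at_epoch(e))

# PCT would use initial_lr=5e-4 with the same milestones.
```

In a PyTorch training loop, the equivalent behavior is what `torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[100, 150, 200], gamma=0.1)` provides.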