Manifold Constraints for Imperceptible Adversarial Attacks on Point Clouds

Authors: Keke Tang, Xu He, Weilong Peng, Jianpeng Wu, Yawen Shi, Daizong Liu, Pan Zhou, Wenping Wang, Zhihong Tian

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments demonstrate that integrating manifold constraints into conventional adversarial attack solutions yields superior imperceptibility, outperforming the state-of-the-art methods.
Researcher Affiliation | Academia | (1) Cyberspace Institute of Advanced Technology, Guangzhou University; (2) School of Computer Science and Cyber Engineering, Guangzhou University; (3) Wangxuan Institute of Computer Technology, Peking University; (4) Hubei Engineering Research Center on Big Data Security, School of Cyber Science and Engineering, Huazhong University of Science and Technology; (5) Department of Computer Science and Engineering, Texas A&M University
Pseudocode | No | The paper does not contain explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide an explicit statement or link for the open-source code of the methodology.
Open Datasets | Yes | We utilize two public datasets for evaluation: ModelNet40 (Wu et al. 2015) and ShapeNetPart (Chang et al. 2015).
Dataset Splits | No | The paper mentions that 2,048 points are randomly sampled from each point cloud, but does not specify the train/validation/test splits for reproduction.
Hardware Specification | Yes | All experiments are conducted on a workstation with one NVIDIA RTX 3090 GPU.
Software Dependencies | No | The paper mentions using PyTorch but does not specify its version number or any other software dependencies with version numbers.
Experiment Setup | Yes | For hyperparameters, we set λ1 = 1.0, λ2 = 0.1, and β = 1.0. We pre-train the invertible auto-encoder under the Chamfer distance constraint for a total of 2000 epochs.
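The experiment-setup row states that the invertible auto-encoder is pre-trained under a Chamfer distance constraint. As a reference point for that metric (not the authors' code, whose exact variant of the distance is not given in this summary), here is a minimal NumPy sketch of the standard symmetric Chamfer distance between two point clouds:

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point clouds p (N, 3) and q (M, 3).

    For each point in one cloud, find the squared Euclidean distance to its
    nearest neighbor in the other cloud; average both directions and sum.
    """
    # Pairwise squared distances, shape (N, M).
    d = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```

This brute-force version is O(N·M) in memory and is only meant to make the constraint concrete; practical point-cloud pipelines typically use a batched GPU implementation.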