Generating Transferable 3D Adversarial Point Cloud via Random Perturbation Factorization

Authors: Bangyan He, Jian Liu, Yiming Li, Siyuan Liang, Jingzhi Li, Xiaojun Jia, Xiaochun Cao

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We conduct experiments on benchmark dataset, verifying our method's effectiveness in increasing transferability while preserving high efficiency."
Researcher Affiliation | Collaboration | "Bangyan He 1,2, Jian Liu 3, Yiming Li 4, Siyuan Liang 1,2, Jingzhi Li 1,2,*, Xiaojun Jia 1,2,*, Xiaochun Cao 5. 1 SKLOIS, Institute of Information Engineering, CAS, Beijing, China; 2 School of Cyber Security, University of Chinese Academy of Sciences, Beijing, China; 3 Ant Group, Beijing, China; 4 Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, China; 5 School of Cyber Science and Technology, Shenzhen Campus, Sun Yat-sen University, Shenzhen, China"
Pseudocode | Yes | "Algorithm 1: The Main Process of Our PF-Attack."
Open Source Code | Yes | "The code is available on https://github.com/HeBangYan/PFAttack"
Open Datasets | Yes | "We used ModelNet40 (Wu et al. 2015), a widely used dataset, to train the model and evaluate the performance of each attack method."
Dataset Splits | No | "ModelNet40 has a total of 12,311 CAD models containing 40 different object categories, of which 9,843 samples were used for training and 2,468 for testing."
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory, or cloud instance types) used for running its experiments.
Software Dependencies | No | The paper mentions using "Adam optimizer" but does not specify any software libraries or dependencies with version numbers.
Experiment Setup | Yes | "The hyper-parameters of the PF-Attack were set to: η = 0.01, τ = 10, β = 0.5, p = 0.5, T = 200, ϵ ∈ {0.18, 0.45}. We use Adam optimizer (Kingma and Ba 2015)."
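The quoted setup (η = 0.01, p = 0.5, T = 200, ϵ, Adam) can be illustrated with a minimal NumPy sketch. Everything beyond those constants is an assumption made for illustration: the toy linear "classifier", the complementary random-mask factorization, and the names `pf_attack` and `adam_step` are hypothetical and do not reproduce the authors' Algorithm 1.

```python
import numpy as np

# Hyper-parameters quoted from the paper's experiment setup.
ETA = 0.01   # Adam learning rate (η)
T = 200      # optimization iterations
P = 0.5      # probability for the random binary factorization mask (p)
EPS = 0.18   # L∞ perturbation budget (ϵ); the paper also uses 0.45

def adam_step(grad, m, v, t, lr=ETA, b1=0.9, b2=0.999, eps=1e-8):
    """One standard Adam update; returns the parameter delta and new moments."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    return -lr * m_hat / (np.sqrt(v_hat) + eps), m, v

def attack_loss_grad(points, w):
    """Toy surrogate loss: w · x (stand-in for an adversarial objective on a
    point-cloud classifier). Its gradient w.r.t. the points is simply w."""
    return float(points.ravel() @ w), w.reshape(points.shape)

def pf_attack(points, w, rng):
    """Sketch of random perturbation factorization: each iteration the
    perturbation delta is split into two complementary sub-perturbations by a
    random binary mask, the loss gradient is taken on each factor separately,
    and the averaged gradient drives an Adam step inside the L∞ ball."""
    delta = np.zeros_like(points)
    m, v = np.zeros_like(points), np.zeros_like(points)
    for t in range(1, T + 1):
        mask = (rng.random(points.shape) < P).astype(points.dtype)
        _, g1 = attack_loss_grad(points + mask * delta, w)        # factor 1
        _, g2 = attack_loss_grad(points + (1 - mask) * delta, w)  # factor 2
        grad = 0.5 * (g1 + g2)            # average over the two factors
        step, m, v = adam_step(grad, m, v, t)
        delta = np.clip(delta + step, -EPS, EPS)  # project to the L∞ ball
    return delta

rng = np.random.default_rng(0)
cloud = rng.standard_normal((64, 3))          # a toy 64-point cloud
delta = pf_attack(cloud, np.ones(cloud.size), rng)
```

Because the toy gradient is constant, the loop simply descends the surrogate loss until `delta` saturates the ϵ-ball; with a real classifier the two masked forward/backward passes per iteration are what the factorization would add over a plain Adam attack.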