Robust Adversarial Objects against Deep Learning Models

Authors: Tzungyu Tsai, Kaichen Yang, Tsung-Yi Ho, Yier Jin

Venue: AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We generate adversarial examples from various 3D models using our proposed algorithm and evaluate the attack results in different scenarios with PointNet++ as the victim network. Additionally, existing defense mechanisms are also tested against our examples. Several adversarial objects are 3D printed, scanned by 3D scanners, and then the resulting point clouds are classified by PointNet++ to demonstrate that our attack remains effective even in the physical world."
Researcher Affiliation | Academia | National Tsing Hua University, Hsinchu, Taiwan; University of Florida, USA
Pseudocode | No | No structured pseudocode or algorithm blocks found; the methodology is described in text and mathematical formulas.
Open Source Code | Yes | "The source code of this paper is released at https://github.com/jinyier/ai_pointnet_attack."
Open Datasets | Yes | "We use the ModelNet40 dataset (Wu et al. 2015) for our experiments, including training, testing the victim models, and generating adversarial examples."
Dataset Splits | Yes | "We use the official splits, where 9,843 examples are used for training, and the remaining 2,468 examples are used for testing." (A loader sketch for this split appears after the table.)
Hardware Specification | Yes | "The attack algorithm is carried out on a server with an Intel 9900K CPU, two NVIDIA RTX 2080 Ti graphics cards, and 64 GB RAM. The physical adversarial objects are 3D printed by a FlashForge Creator Pro 3D printer and re-scanned as meshes by an EinScan-SE 3D scanner."
Software Dependencies | No | "The code is written in the Python programming language with the TensorFlow framework." (No specific version numbers are provided for Python or TensorFlow.)
Experiment Setup | Yes | "The hyper-parameters we used for the attack are: α = 5, β = 3, ℓ = 0.1, κ = 15, which are defined in Equations (1), (2), and (5)." (A sketch of how these values could enter an attack step follows.)
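
The paper's loss terms (Equations (1), (2), and (5)) are not reproduced on this page, so the following is only a minimal TensorFlow sketch of how the reported hyper-parameters could plug into a Carlini-Wagner-style attack step against a point-cloud classifier such as PointNet++. The two regularizers weighted by α and β (an L2 perturbation norm and a one-sided Chamfer distance) are stand-ins, not the paper's actual terms, and all function and variable names are illustrative.

```python
import tensorflow as tf

# Values reported in the paper's experiment setup.
ALPHA, BETA, KAPPA, STEP = 5.0, 3.0, 15.0, 0.1

def one_sided_chamfer(adv, orig):
    """Mean squared distance from each adversarial point to its nearest
    original point. Shapes: (B, N, 3) x (B, M, 3) -> (B,).
    Builds an (B, N, M) distance matrix, so keep N and M modest."""
    d = tf.reduce_sum(tf.square(adv[:, :, None, :] - orig[:, None, :, :]), axis=-1)
    return tf.reduce_mean(tf.reduce_min(d, axis=-1), axis=-1)

def attack_step(model, points, target, delta, optimizer):
    """One optimization step pushing `points + delta` toward class `target`.

    model: callable mapping (B, N, 3) point clouds to (B, C) logits,
           e.g. a PointNet++ classifier.
    delta: tf.Variable with the same shape as `points` (the perturbation).
    """
    with tf.GradientTape() as tape:
        adv = points + delta
        logits = model(adv)
        onehot = tf.one_hot(target, logits.shape[-1])
        target_logit = tf.reduce_sum(onehot * logits, axis=-1)
        # Max logit over all non-target classes.
        other_logit = tf.reduce_max(logits - 1e9 * onehot, axis=-1)
        # C&W-style margin loss with confidence kappa.
        adv_loss = tf.maximum(other_logit - target_logit + KAPPA, 0.0)
        # Stand-in regularizers weighted by alpha and beta.
        loss = tf.reduce_mean(
            adv_loss
            + ALPHA * tf.reduce_sum(tf.square(delta), axis=[1, 2])
            + BETA * one_sided_chamfer(adv, points)
        )
    grads = tape.gradient(loss, [delta])
    optimizer.apply_gradients(zip(grads, [delta]))
    return loss
```

A typical driver would initialize `delta = tf.Variable(tf.zeros_like(points))`, construct `tf.keras.optimizers.Adam(STEP)`, and call `attack_step` for a few hundred iterations, keeping the perturbed cloud whose prediction first reaches the target class.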
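For the Dataset Splits row, here is a minimal loader sketch, assuming the common HDF5 packaging of ModelNet40 (2,048-point clouds under `data`/`label` keys) used by PointNet-family code; the directory and file names are assumptions, not confirmed by the paper.

```python
import glob

import h5py
import numpy as np

def load_split(pattern):
    """Concatenate point clouds and labels from HDF5 files matching `pattern`."""
    points, labels = [], []
    for path in sorted(glob.glob(pattern)):
        with h5py.File(path, "r") as f:
            points.append(f["data"][:])   # (n, 2048, 3) xyz coordinates
            labels.append(f["label"][:])  # (n, 1) class indices
    return np.concatenate(points), np.concatenate(labels).ravel()

train_x, train_y = load_split("modelnet40_ply_hdf5_2048/ply_data_train*.h5")
test_x, test_y = load_split("modelnet40_ply_hdf5_2048/ply_data_test*.h5")

# Official split sizes reported in the paper: 9,843 train / 2,468 test.
assert len(train_x) == 9843 and len(test_x) == 2468
```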