SADA: Semantic Adversarial Diagnostic Attacks for Autonomous Applications
Authors: Abdullah Hamdi, Matthias Mueller, Bernard Ghanem (pp. 10901-10908)
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We apply BBGAN on three different tasks, primarily targeting aspects of autonomous navigation: object detection, self-driving, and autonomous UAV racing. On these tasks, BBGAN can generate failure cases that consistently fool a trained agent. Also: Table 2: Attack Fooling Rate (AFR) Comparison: AFR of adversarial samples generated on three safety-critical applications: YOLOv3 object detection, self-driving, and UAV racing. |
| Researcher Affiliation | Academia | Abdullah Hamdi, Matthias Müller, Bernard Ghanem; King Abdullah University of Science and Technology (KAUST), Thuwal, Saudi Arabia; {abdullah.hamdi, matthias.mueller.2, bernard.ghanem}@kaust.edu.sa |
| Pseudocode | Yes | Algorithm 1: Generic Adversarial Attacks on Agents |
| Open Source Code | No | The paper discusses the use of open-source software like Blender, CARLA, and Sim4CV, but does not provide a statement or link for the open-sourcing of their own method's code (BBGAN). |
| Open Datasets | Yes | The 3D collection consists of 100 shapes of 12 object classes (aeroplane, bench, bicycle, boat, bottle, bus, car, chair, dining table, motorbike, train, truck) from Pascal3D (Xiang, Mottaghi, and Savarese 2014) and ShapeNet (Chang et al. 2015). and The environment used is CARLA driving simulator (Dosovitskiy et al. 2017), the most realistic open-source urban driving simulator currently available. and We use the general-purpose simulator for computer vision applications, Sim4CV (Müller et al. 2018b). |
| Dataset Splits | No | The paper describes training the BBGAN with an 'induced set' and evaluates performance using an 'Attack Fooling Rate (AFR)' on a test set, but does not explicitly define or specify a separate validation dataset split. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, processor types, or memory used for running the experiments. |
| Software Dependencies | No | The paper mentions software such as YOLOv3, Blender (Blender Online Community 2018), CARLA (Dosovitskiy et al. 2017), and Sim4CV (Müller et al. 2018b), but does not provide specific version numbers for these or any other software dependencies. |
| Experiment Setup | Yes | For object detection, we use eight parameters that have been shown to affect detection performance and frequently occur in real setups (refer to Figure 4). and For object detection, we use N = 20000 image renderings for each class (a total of 240K images). Due to the computational cost, our dataset for the autonomous navigation tasks comprises only N = 1000 samples. The induced set size is always fixed to be s = 100. and Both the generator G and the discriminator D consist of an MLP with 2 layers. and To determine whether the agent is fooled, we use a fooling rate threshold ϵ = 0.3 (Chen et al. 2018), ϵ = 0.6, and ϵ = 0.7 for object detection, self-driving, and UAV racing, respectively. |
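The setup rows above reference Algorithm 1 (generic adversarial attacks on agents), an "induced set" of size s = 100, and a fooling rate threshold ϵ. A minimal sketch of that generic black-box loop, not the authors' released code, is below: sample semantic parameter vectors, query the agent as a black box, count a sample as fooling when its score falls below ϵ, and keep the s hardest samples as the induced set that would seed BBGAN training. The `agent_score` function here is a hypothetical toy stand-in for the real agent (e.g. YOLOv3 detection confidence under a rendered scene).

```python
import numpy as np

def agent_score(params):
    # Hypothetical toy agent: score decays as the semantic
    # parameters drift away from the nominal configuration (0).
    return float(np.exp(-np.linalg.norm(params)))

def black_box_attack(n_samples, n_params, eps, s, seed=0):
    """Sample parameter vectors, score each with the black-box agent,
    and return (induced_set, afr): the s lowest-scoring samples and
    the attack fooling rate at threshold eps."""
    rng = np.random.default_rng(seed)
    samples = rng.uniform(-1.0, 1.0, size=(n_samples, n_params))
    scores = np.array([agent_score(x) for x in samples])
    afr = float(np.mean(scores < eps))          # fraction of samples that fool the agent
    induced = samples[np.argsort(scores)[:s]]   # hardest s samples (induced set)
    return induced, afr

# Mirrors the reported object-detection setup: 8 semantic parameters,
# eps = 0.3, induced set size s = 100 (N reduced here for brevity).
induced_set, afr = black_box_attack(n_samples=1000, n_params=8, eps=0.3, s=100)
```

In the paper's pipeline the induced set would then serve as training data for the BBGAN generator, so that it learns to propose fooling parameters directly rather than by rejection sampling.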