Adversarial Attack for Asynchronous Event-Based Data

Authors: Wooju Lee, Hyun Myung

Venue: AAAI 2022, pp. 1237-1244

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our algorithm achieves an attack success rate of 97.95% on the N-Caltech101 dataset. Furthermore, the adversarial training model improves robustness on the adversarial event data compared to the original model." "In this section, we first present an extensive evaluation of adversarial attacks for event-based deep learning. We validated our algorithm with various grid representations and kernel functions on the standard event camera benchmark. All the testing results are obtained with an average of three random seeds."
Researcher Affiliation | Academia | Urban Robotics Lab, School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Republic of Korea
Pseudocode | Yes | Algorithm 1: Generating Additional Adversarial Events (an illustrative attack sketch follows the table below)
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described, such as a repository link or an explicit statement of code release.
Open Datasets | Yes | "We use N-Caltech101 dataset (Orchard et al. 2015) in our evaluation. N-Caltech101 is the event-based version of Caltech101 (Zhao, Dua, and Singh 2004). It was recorded with an ATIS event camera (Posch, Matolin, and Wohlgenannt 2010) on a motor."
Dataset Splits | Yes | "N-Caltech101 consists of 4,356 training samples and 2,612 validating samples in 100 classes."
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper mentions the use of 'ADAM optimizer' but does not provide specific version numbers for software dependencies or libraries (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup | Yes | "We follow the settings in the original EST model (Gehrig et al. 2019) to train target models: ADAM optimizer (Kingma and Ba 2014) with an initial learning rate of 0.0001 that decays by 0.5 times every 1 epoch; weight decay of 0; the batch normalization momentum (Kingma and Ba 2014) of 0.1. We train the networks for 30 epochs for the event camera dataset." (a training-configuration sketch follows below)
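
For concreteness, here is a minimal PyTorch sketch of the quoted training configuration. The authors' EST backbone and N-Caltech101 dataloader are not publicly released, so the model architecture, tensor shapes, and dummy loader below are placeholders for illustration, not the paper's code.

```python
import torch
import torch.nn as nn

# Stand-in classifier; the paper trains an EST backbone (Gehrig et al. 2019).
# The 9-channel 180x240 grid shape is an assumption for illustration.
model = nn.Sequential(
    nn.Conv2d(9, 16, kernel_size=3, padding=1),
    nn.BatchNorm2d(16, momentum=0.1),  # BN momentum of 0.1, as quoted
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 100),  # 100 classes, per the split description above
)

# Dummy batches standing in for an N-Caltech101 grid-representation loader.
train_loader = [(torch.randn(8, 9, 180, 240), torch.randint(0, 100, (8,)))
                for _ in range(4)]

# ADAM with initial learning rate 1e-4 and weight decay 0, as quoted.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=0)
# StepLR with step_size=1 and gamma=0.5 reproduces the quoted schedule:
# the learning rate decays by 0.5 times every 1 epoch.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.5)
criterion = nn.CrossEntropyLoss()

for epoch in range(30):  # 30 epochs, as quoted
    for grids, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(grids), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()
```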
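The paper's Algorithm 1 (Generating Additional Adversarial Events) is not reproduced here. As a rough illustration of the gradient-based attack family it belongs to, the sketch below applies a standard one-step FGSM perturbation to a dense grid representation of the events; this is a generic baseline, not the authors' event-injection method, and all names and shapes are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_on_grid(model: nn.Module, grid: torch.Tensor,
                 label: torch.Tensor, eps: float = 0.05) -> torch.Tensor:
    """One-step gradient-sign perturbation of a dense event-grid tensor.

    A generic FGSM baseline for illustration only; the paper's Algorithm 1
    instead injects additional adversarial events into the raw event stream.
    """
    grid = grid.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(grid), label)
    loss.backward()
    # Move each grid cell in the direction that increases the loss.
    return (grid + eps * grid.grad.sign()).detach()

# Demo with a stand-in linear classifier and random data (shapes are assumptions).
model = nn.Sequential(nn.Flatten(), nn.Linear(9 * 180 * 240, 100))
grid = torch.randn(1, 9, 180, 240)
label = torch.tensor([3])
adv_grid = fgsm_on_grid(model, grid, label)
print((adv_grid - grid).abs().max())  # per-cell perturbation is bounded by eps
```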