Online Evasion Attacks on Recurrent Models: The Power of Hallucinating the Future

Authors: Byunggill Joe, Insik Shin, Jihun Hamm

IJCAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We validate the effectiveness of the proposed framework and attacks through various experiments. We evaluate our attacks using six datasets. Our predictive attack approaches 98% of the performance of the clairvoyant on average. We perform further empirical analysis of the predictive attacks, demonstrating the versatility of our framework and attack robustness. (Section 5, Experiment: We evaluate our attacks to answer the following research questions.)
Researcher Affiliation | Academia | Byunggill Joe (1), Insik Shin (1), and Jihun Hamm (2); (1) School of Computing, KAIST, Daejeon, South Korea; (2) Department of Computer Science, Tulane University, Louisiana, USA; {byunggill.joe, insik.shin}@kaist.ac.kr, jhamm3@tulane.edu
Pseudocode | Yes | Algorithm 1: Predictive Attack at time t (a hedged sketch of this per-step attack loop appears below the table).
Open Source Code | Yes | https://github.com/byunggilljoe/rnn_online_evasion_attack (footnote 1 in the paper). Appendix is included.
Open Datasets | Yes | MNIST [Le Cun et al., 1998], Fashion MNIST [Xiao et al., 2017], Mortality [Harutyunyan et al., 2019], User [Casale, 2014], Udacity [Gonzalez et al., 2017], and Energy [Candanedo et al., 2017].
Dataset Splits | No | The paper lists the datasets used in the experiments but does not provide specific training/validation/test split percentages or sample counts, nor does it refer to predefined splits for reproducibility.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory, cloud instances) used for running its experiments.
Software Dependencies | No | The paper mentions the 'Adam optimizer' but does not provide specific ancillary software details, such as library names with version numbers, needed to replicate the experiment.
Experiment Setup | Yes | All models, except for Udacity, consist of one LSTM layer followed by two linear layers with ReLU activations. For Udacity, we use CNN-LSTM as a victim model, and CrevNet [Yu et al., 2020] as Qϕ to deal with the high-dimensional images. More model details are in Appendix B. We use the Adam optimizer for training with a learning rate of 1e-4. ... We set MAX_ITERS = 100, and α = 1.5ϵ/MAX_ITERS. (A minimal sketch of this setup is shown below.)
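
To make the reported setup concrete, the following is a minimal PyTorch sketch of the victim architecture described in the Experiment Setup row (one LSTM layer followed by two linear layers with ReLU) together with the reported optimizer setting. The input, hidden, and class dimensions, and the per-step output head, are illustrative assumptions rather than values taken from the paper; the Udacity victim (CNN-LSTM with CrevNet as Qϕ) is not sketched.

# Minimal sketch (assumed dimensions) of the victim model: one LSTM layer
# followed by two linear layers with ReLU activations, emitting per-step logits.
import torch
import torch.nn as nn

class VictimRNN(nn.Module):
    def __init__(self, input_size=28, hidden_size=128, num_classes=10):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(hidden_size, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, num_classes),
        )

    def forward(self, x):
        # x: (batch, time, input_size) -> per-step logits (batch, time, num_classes)
        out, _ = self.lstm(x)
        return self.head(out)

model = VictimRNN()
# Learning rate 1e-4 is the value reported in the Experiment Setup row.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

Per-step logits are used here because an online attack needs the victim's prediction at every time step as the input sequence streams in.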
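The per-step loop named in the Pseudocode row (Algorithm 1, Predictive Attack at time t) can be pictured as follows. This is a hedged sketch only: it assumes a predictor q_phi(prefix, k) that hallucinates k future frames, per-step classification with cross-entropy, and labels for the current and hallucinated steps; these names and choices are illustrative, not the paper's exact interface. The constants MAX_ITERS = 100 and alpha = 1.5*eps/MAX_ITERS come from the Experiment Setup row.

import torch
import torch.nn.functional as F

MAX_ITERS = 100  # reported in the Experiment Setup row

def predictive_attack_step(model, q_phi, x_past, x_t, y_t_future, eps, k=5):
    # x_past: observed prefix (batch, t, features); x_t: current frame (batch, features)
    # y_t_future: assumed labels for time t and the k hallucinated steps, shape (batch, k+1)
    alpha = 1.5 * eps / MAX_ITERS                 # step size from the Experiment Setup row
    t = x_past.size(1)
    delta = torch.zeros_like(x_t, requires_grad=True)
    for _ in range(MAX_ITERS):
        x_cur = (x_t + delta).unsqueeze(1)        # perturbed current frame
        # Hallucinate k future frames from the (perturbed) observed prefix.
        x_future = q_phi(torch.cat([x_past, x_cur], dim=1), k)
        x_full = torch.cat([x_past, x_cur, x_future], dim=1)
        logits = model(x_full)                    # per-step logits (batch, time, classes)
        # Maximize the loss at the current step and over the hallucinated future.
        loss = F.cross_entropy(logits[:, t:].reshape(-1, logits.size(-1)),
                               y_t_future.reshape(-1))
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()    # gradient-ascent step on the perturbation
            delta.clamp_(-eps, eps)               # L_inf projection onto the eps-ball
        delta.grad.zero_()
    return (x_t + delta).detach()

For the regression datasets the paper lists (e.g., Udacity steering, Energy), the cross-entropy term would presumably be replaced by the corresponding regression loss; that substitution is an assumption of this sketch, not a detail reported in the summary above.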