Adversarial Regression for Detecting Attacks in Cyber-Physical Systems

Authors: Amin Ghafouri, Yevgeniy Vorobeychik, Xenofon Koutsoukos

IJCAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our experiments show that (a) the stealthy attacks we develop are extremely effective, and (b) our resilient detector significantly reduces the impact of a stealthy attack without appreciably increasing the false alarm rate."
Researcher Affiliation | Collaboration | Amin Ghafouri (Cruise Automation, San Francisco, CA); Yevgeniy Vorobeychik and Xenofon Koutsoukos (Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN); aminghafouri.ut@gmail.com, {yevgeniy.vorobeychik,xenofon.koutsoukos}@vanderbilt.edu
Pseudocode | Yes | Algorithm 1 ("Adversarial Regression for Neural Network") and Algorithm 2 ("Resilient Detector")
Open Source Code | No | The paper does not contain any explicit statement about releasing source code for the described methodology, nor does it provide any links to a code repository.
Open Datasets | Yes | "We evaluate our contributions using a case study of the well-known Tennessee-Eastman process control system (TE-PCS)" and "We use the revised Simulink model of TE-PCS [Bathelt et al., 2015]."
Dataset Splits | No | "Note that since the data is sequential, the train and test data cannot be randomly sampled and instead, we divide the data in two blocks." (This mentions train and test sets, but neither a separate validation set nor specific split percentages/counts. A minimal sketch of such a block split appears after the table.)
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU or GPU models, memory) used to run the experiments.
Software Dependencies | No | "We trained the networks in Tensorflow for 5000 epochs using Adam optimizer with β1 = 0.9, β2 = 0.999, and ϵ = 10⁻⁸, and a learning rate of 0.01." (TensorFlow is mentioned, but without a version number, and no other software dependencies or versions are listed.)
Experiment Setup | Yes | "We considered neural networks with 2 to 4 hidden layers and 10 to 20 neurons in each layer. All the neurons in the hidden layers use tanh activation functions. We also experimented with ReLU activation functions but tanh performs better. We trained the networks in Tensorflow for 5000 epochs using Adam optimizer with β1 = 0.9, β2 = 0.999, and ϵ = 10⁻⁸, and a learning rate of 0.01." (A minimal code sketch of this training setup also follows the table.)
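
As an illustration of the block-style split quoted in the Dataset Splits row, here is a minimal sketch in Python/NumPy. The 80/20 ratio, the array shapes, and the block_split helper are illustrative assumptions; the paper does not report the actual proportions.

    import numpy as np

    def block_split(X, y, train_frac=0.8):
        """Split sequential data into two contiguous blocks, preserving order."""
        cut = int(len(X) * train_frac)  # boundary between the two blocks
        return (X[:cut], y[:cut]), (X[cut:], y[cut:])

    # Hypothetical stand-in for the TE-PCS sensor time series.
    X = np.random.randn(10000, 41).astype("float32")
    y = np.random.randn(10000).astype("float32")
    (X_train, y_train), (X_test, y_test) = block_split(X, y)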
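
The Experiment Setup row translates into a small regression network. Below is a minimal sketch using the modern tf.keras API (the paper predates it, so the authors' exact code would differ). The stated hyperparameters (tanh hidden layers; Adam with β1 = 0.9, β2 = 0.999, ϵ = 10⁻⁸; learning rate 0.01; 5000 epochs) come from the paper; the choice of two hidden layers of 20 neurons each and the 41-dimensional input are assumptions within the reported ranges.

    import numpy as np
    import tensorflow as tf

    # Hypothetical training block (see the split sketch above).
    X_train = np.random.randn(8000, 41).astype("float32")
    y_train = np.random.randn(8000).astype("float32")

    # Two hidden layers of 20 tanh neurons each (the paper explored
    # 2-4 layers and 10-20 neurons per layer) and a linear output.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(41,)),
        tf.keras.layers.Dense(20, activation="tanh"),
        tf.keras.layers.Dense(20, activation="tanh"),
        tf.keras.layers.Dense(1),
    ])

    # Adam with the reported hyperparameters and learning rate.
    model.compile(
        optimizer=tf.keras.optimizers.Adam(
            learning_rate=0.01, beta_1=0.9, beta_2=0.999, epsilon=1e-8),
        loss="mse",
    )
    model.fit(X_train, y_train, epochs=5000, verbose=0)

Swapping activation="tanh" for "relu" gives the ReLU variant that the authors report performed worse than tanh.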