Robust Multivariate Time-Series Forecasting: Adversarial Attacks and Defense Mechanisms

Authors: Linbo Liu, Youngsuk Park, Trong Nghia Hoang, Hilaf Hasson, Luke Huan

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on real-world datasets confirm that our attack schemes are powerful and our defense algorithms are more effective compared with baseline defense mechanisms. We conduct numerical experiments to demonstrate the effectiveness of our proposed indirect sparse attack on multivariate probabilistic forecasting models and compare various defense mechanisms.
Researcher Affiliation | Collaboration | Linbo Liu (1), Youngsuk Park (1), Trong Nghia Hoang (2), Hilaf Hasson (1), Jun Huan (1); (1) AWS AI Labs, (2) Washington State University; {linbol, pyoungsu, hashilaf, lukehuan}@amazon.com, trongnghia.hoang@wsu.edu
Pseudocode | Yes | Algorithm 1 Deterministic Adversarial Attack; Algorithm 2 Randomized Smoothing; Algorithm 3 Mini-max Defense. (A minimal sketch of the randomized-smoothing defense appears after the table.)
Open Source Code | Yes | The code to reproduce our experiment results can be found at https://github.com/awslabs/gluonts/tree/dev/src/gluonts/nursery/robust-mts-attack.
Open Datasets | Yes | We include Traffic (Asuncion & Newman, 2007), Electricity (Asuncion & Newman, 2007), Taxi (NYC Taxi & Limousine Commission, 2015), Wiki (Lai, 2017). See Appendix B.1 for more information.
Dataset Splits | No | For the noise level σ in DA and RS, we select it via a validation set, and it turns out no σ is uniformly better than the others across different sparsity levels. Thus, σ = 0.1 is chosen in the empirical evaluation. (While the paper mentions a 'validation set', it does not give the split percentages or sizes, which are required for reproducibility.)
Hardware Specification | No | The paper does not explicitly describe the hardware used to run its experiments, such as specific CPU or GPU models, or cloud computing instance types.
Software Dependencies | No | The paper mentions 'pytorch-ts (Rasul, 2021)' as the implementation framework, but does not provide specific version numbers for software dependencies such as PyTorch, Python, or other libraries used in the experiments.
Experiment Setup | Yes | We choose prediction length τ = 24, context length T = 4τ = 96, and sparsity levels κ = 1, 3, 5, 7, 9. For all experiments, we train a DeepVAR with rank 5. The attack energy η = c1 · max|x| is proportional to the largest element of the past observation in magnitude, where c1 is set to 0.5. For the adversarial target t_adv, we first draw a prediction x̂ from the un-attacked model p_θ(·|x) and choose t_adv = c2 · x̂ for constants c2 = 0.5 and 2.0. Unless otherwise stated, the number of sample paths drawn from the prediction distribution is n = 100, used to quantify the quantiles q^(α)_{i,t}. In the mini-max defense, the sparsity level of the sparse layer is set to 5 for all cases. For the noise level σ in DA and RS, we select it via a validation set, and it turns out no σ is uniformly better than the others across different sparsity levels; thus σ = 0.1 is chosen in the empirical evaluation. (These hyperparameters are collected in the configuration sketch below.)
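
For concreteness, here is a minimal sketch of the randomized-smoothing defense named in the Pseudocode row (Algorithm 2). The function name, the `model(context)` interface, and the mean aggregation are illustrative assumptions, not the paper's implementation; the actual code lives in the linked gluonts nursery repository.

```python
import torch

def randomized_smoothing_forecast(model, context, sigma=0.1, n_noise=100):
    """Hypothetical sketch of a randomized-smoothing defense.

    Assumes `model` maps a context window of shape (T, d) to a forecast
    of shape (tau, d); sigma = 0.1 matches the noise level the paper
    selects on a validation set.
    """
    forecasts = []
    for _ in range(n_noise):
        # Perturb the observed context with isotropic Gaussian noise.
        noisy = context + sigma * torch.randn_like(context)
        with torch.no_grad():
            forecasts.append(model(noisy))
    # Aggregate over noise draws; averaging is one simple smoothing choice.
    return torch.stack(forecasts).mean(dim=0)
```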
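
The configuration sketch below collects the hyperparameters quoted in the Experiment Setup row, with hypothetical helpers for the attack budget η = c1 · max|x|, the adversarial target t_adv = c2 · x̂, and the empirical quantiles q^(α)_{i,t}. All names are illustrative and do not reflect the repository's API.

```python
import torch

# Hyperparameters quoted in the Experiment Setup row.
TAU = 24                   # prediction length
T = 4 * TAU                # context length = 96
KAPPAS = (1, 3, 5, 7, 9)   # attack sparsity levels
C1 = 0.5                   # attack-energy scale
C2_VALUES = (0.5, 2.0)     # adversarial-target scales
N_SAMPLES = 100            # sample paths per forecast
SIGMA = 0.1                # noise level for DA and RS

def attack_budget(context: torch.Tensor, c1: float = C1) -> torch.Tensor:
    # eta = c1 * max|x|: budget proportional to the largest past observation.
    return c1 * context.abs().max()

def adversarial_target(model, context: torch.Tensor, c2: float) -> torch.Tensor:
    # t_adv = c2 * x_hat, where x_hat is drawn from the un-attacked model.
    with torch.no_grad():
        x_hat = model(context)
    return c2 * x_hat

def empirical_quantile(samples: torch.Tensor, alpha: float) -> torch.Tensor:
    # q^(alpha)_{i,t}: per-series, per-step quantile over the n sample paths.
    # `samples` has shape (n, tau, d); the result has shape (tau, d).
    return torch.quantile(samples, alpha, dim=0)
```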