Noisy Recurrent Neural Networks

Authors: Soon Hoe Lim, N. Benjamin Erichson, Liam Hodgkinson, Michael W. Mahoney

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our theory is supported by empirical results which demonstrate that the RNNs have improved robustness with respect to various input perturbations. We demonstrate via empirical experiments on benchmark data sets that NRNN classifiers are more robust to data perturbations when compared to other recurrent models, while retaining state-of-the-art performance for clean data.
Researcher Affiliation | Academia | Soon Hoe Lim (Nordita, KTH Royal Institute of Technology and Stockholm University, soon.hoe.lim@su.se); N. Benjamin Erichson (University of Pittsburgh School of Engineering, erichson@pitt.edu); Liam Hodgkinson (ICSI and Department of Statistics, UC Berkeley, liam.hodgkinson@berkeley.edu); Michael W. Mahoney (ICSI and Department of Statistics, UC Berkeley, mmahoney@stat.berkeley.edu)
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | Research code is provided here: https://github.com/erichson/NoisyRNN.
Open Datasets | Yes | We study the sensitivity of different RNN models with respect to a sequence of perturbed inputs during inference time. We consider different types of perturbations: (a) white noise; (b) multiplicative white noise; (c) salt and pepper; and (d) adversarial perturbations. We train each model with the prescribed tuning parameters for the ordered (see Sec. 7.1) and permuted (see SM) MNIST task. For the Electrocardiogram (ECG) classification task we performed a non-exhaustive hyperparameter search. (A hedged sketch of these perturbations follows the table.)
Dataset Splits | Yes | Next, we consider the Electrocardiogram (ECG) classification task that aims to discriminate between normal and abnormal heart beats of a patient that has severe congestive heart failure [20]. We use 500 sequences of length 140 for training, 500 sequences for validation, and 4000 sequences for testing. (A minimal split sketch follows the table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts, or other system specifications) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiment.
Experiment Setup | Yes | We train each model with the prescribed tuning parameters for the ordered (see Sec. 7.1) and permuted (see SM) MNIST task. For the Electrocardiogram (ECG) classification task we performed a non-exhaustive hyperparameter search. In both cases, we set the multiplicative noise level to 0.02, whereas we consider the additive noise levels 0.02 and 0.05. We chose these configurations as they appear to provide a good trade-off between accuracy and robustness. Here, the NRNN, trained with multiplicative noise level set to 0.03 and additive noise level set to 0.06, performs best both on clean as well as on perturbed input sequences. (A hedged sketch of such a noisy recurrent cell follows the table.)
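
The Open Datasets row lists four families of inference-time perturbations. Below is a minimal NumPy sketch of how the first three could be applied to a test sequence; the function name `perturb_sequence`, the `level` scales, and the salt-and-pepper clamping rule are illustrative assumptions rather than the paper's exact procedure, and adversarial perturbations are omitted because they require gradient access to a trained model.

```python
import numpy as np

def perturb_sequence(x, kind="white", level=0.1, rng=None):
    """Apply a test-time perturbation to an input sequence x of shape (T, d).

    Sketch only: mirrors the perturbation families named in the Open Datasets
    row (additive white noise, multiplicative white noise, salt and pepper);
    the scaling conventions here are assumptions, not the paper's.
    """
    rng = np.random.default_rng() if rng is None else rng
    if kind == "white":                  # x + eps, eps ~ N(0, level^2)
        return x + level * rng.standard_normal(x.shape)
    if kind == "multiplicative":         # x * (1 + eps), eps ~ N(0, level^2)
        return x * (1.0 + level * rng.standard_normal(x.shape))
    if kind == "salt_pepper":            # clamp a random fraction of entries
        out = x.copy()
        mask = rng.random(x.shape) < level
        out[mask] = rng.choice([x.min(), x.max()], size=int(mask.sum()))
        return out
    raise ValueError(f"unknown perturbation kind: {kind}")

# Example: perturb a length-140 univariate ECG-style sequence.
x = np.sin(np.linspace(0, 8 * np.pi, 140)).reshape(140, 1)
x_noisy = perturb_sequence(x, kind="salt_pepper", level=0.05)
```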
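
The Dataset Splits row reports a 500 / 500 / 4000 train/validation/test split of length-140 ECG sequences. The snippet below sketches one way to realize such a split; the placeholder arrays and the random seed are assumptions, and only the counts and sequence length come from the row above.

```python
import numpy as np

# Placeholder data standing in for the 5000 length-140 ECG sequences and labels.
sequences = np.random.randn(5000, 140)
labels = np.random.randint(0, 2, size=5000)

rng = np.random.default_rng(0)           # seed chosen arbitrarily
idx = rng.permutation(len(sequences))

train_idx, val_idx, test_idx = idx[:500], idx[500:1000], idx[1000:]
x_train, y_train = sequences[train_idx], labels[train_idx]
x_val, y_val = sequences[val_idx], labels[val_idx]
x_test, y_test = sequences[test_idx], labels[test_idx]   # 4000 test sequences
```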
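
The Experiment Setup row quotes multiplicative and additive noise levels (0.02/0.02-0.05 for MNIST, 0.03/0.06 for ECG). The PyTorch sketch below illustrates the general idea of injecting both kinds of Gaussian noise into a recurrent hidden-state update during training; the tanh drift, the step size `dt`, and the exact way the noise enters are simplifying assumptions, not the authors' parameterization (see the repository linked above for that).

```python
import torch
import torch.nn as nn

class NoisyRNNCell(nn.Module):
    """Illustrative recurrent cell with noise injected into the hidden state.

    A minimal sketch: the hidden update is perturbed during training by
    multiplicative and additive Gaussian noise, loosely following the idea of
    training an RNN as a discretized stochastic dynamical system.
    """

    def __init__(self, input_dim, hidden_dim, dt=0.1,
                 mult_noise=0.02, add_noise=0.05):
        super().__init__()
        self.W = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.U = nn.Linear(input_dim, hidden_dim)
        self.dt = dt
        self.mult_noise = mult_noise   # e.g. 0.02, as quoted for the MNIST setup
        self.add_noise = add_noise     # e.g. 0.02 or 0.05, as quoted for MNIST

    def forward(self, x_t, h):
        drift = torch.tanh(self.W(h) + self.U(x_t))
        h_new = h + self.dt * drift
        if self.training:              # noise injected only during training
            sqrt_dt = self.dt ** 0.5
            noise = (self.mult_noise * h * torch.randn_like(h)   # multiplicative part
                     + self.add_noise * torch.randn_like(h))      # additive part
            h_new = h_new + sqrt_dt * noise
        return h_new

# Example: roll the cell over a batch of sequences of shape (batch, T, input_dim).
cell = NoisyRNNCell(input_dim=1, hidden_dim=64)
x = torch.randn(8, 140, 1)
h = torch.zeros(8, 64)
for t in range(x.shape[1]):
    h = cell(x[:, t, :], h)
```

In evaluation mode (`cell.eval()`) the update reduces to its deterministic part, which is consistent with the paper's framing of noise injection as a training-time regularizer.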