Neural Transformation Learning for Deep Anomaly Detection Beyond Images
Authors: Chen Qiu, Timo Pfrommer, Marius Kloft, Stephan Mandt, Maja Rudolph
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on time series show that our proposed method outperforms existing approaches in the one-vs.-rest setting and is competitive in the more challenging n-vs-rest anomaly detection task. On medical and cyber-security tabular data, our method learns domain-specific transformations and detects anomalies more accurately than previous work. |
| Researcher Affiliation | Collaboration | Chen Qiu (1,2), Timo Pfrommer (1), Marius Kloft (2), Stephan Mandt (3), Maja Rudolph (1); 1: Bosch Center for AI, 2: TU Kaiserslautern, 3: UC Irvine. Correspondence to: Maja Rudolph <majarita.rudolph@de.bosch.com>. |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | No | The paper does not provide a statement or link indicating that its source code is openly available for the described methodology. |
| Open Datasets | Yes | The datasets come from the UEA multivariate time series classification archive (Bagnall et al., 2018), from which we selected datasets on which supervised multiclass classification methods achieve strong results (Ruiz et al., 2020). |
| Dataset Splits | Yes | We use 10% of the test set as the validation set to allow parameterization selection. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments (e.g., specific GPU/CPU models, memory, or cloud instance types). |
| Software Dependencies | No | The paper describes the architecture and components of NeuTraL AD and the baselines, but it does not specify any software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | The masks Mk are each a stack of three residual blocks of 1d convolutional layers with instance normalization layers and ReLU activations, as well as one 1d convolutional layer on the top. For the multiplicative parameterization, a sigmoid activation is applied to the masks. All bias terms are fixed as zero, and the learnable affine parameters of the instance normalization layers are frozen. The same encoder architecture is used for NeuTraL AD, Deep SVDD, DROCC, and with slight modification to achieve the appropriate number of outputs for DAGMM and transformation prediction with fixed Ts. The encoder is a stack of residual blocks of 1d convolutional layers. The number of blocks depends on the dimensionality of the data and is detailed in Appendix B. The encoder has output dimension 64 for all experiments. |
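
The Experiment Setup row above describes the mask networks and the shared encoder in prose. Below is a minimal PyTorch sketch of that description. Channel widths, kernel sizes, the number of encoder blocks, the number of transformations K, and the pooling/linear head are illustrative assumptions (the paper defers block counts to its Appendix B); only the structural choices quoted in the table are taken from the paper: residual 1d-conv blocks with frozen, non-affine instance normalization and ReLU, a final 1d conv, zero bias terms, a sigmoid for the multiplicative mask parameterization, and a 64-dimensional encoder output.

```python
import torch
import torch.nn as nn


class ResBlock1d(nn.Module):
    """Residual block of 1d convolutions with instance normalization
    (no learnable affine parameters) and ReLU; all bias terms are
    disabled, i.e. fixed at zero."""

    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        pad = kernel_size // 2
        self.body = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size, padding=pad, bias=False),
            nn.InstanceNorm1d(channels, affine=False),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size, padding=pad, bias=False),
            nn.InstanceNorm1d(channels, affine=False),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(x + self.body(x))


class MaskNet(nn.Module):
    """One learnable transformation M_k: three residual blocks followed by
    a single 1d conv; with the multiplicative parameterization a sigmoid is
    applied and the transformed view is M_k(x) * x."""

    def __init__(self, channels: int, multiplicative: bool = True):
        super().__init__()
        self.blocks = nn.Sequential(*[ResBlock1d(channels) for _ in range(3)])
        self.out = nn.Conv1d(channels, channels, kernel_size=1, bias=False)
        self.multiplicative = multiplicative

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        m = self.out(self.blocks(x))
        return torch.sigmoid(m) * x if self.multiplicative else m


class Encoder(nn.Module):
    """Shared encoder: a stack of residual 1d-conv blocks (depth is
    data-dependent per Appendix B) reduced to a 64-d embedding; the
    pooling and linear head here are assumptions for illustration."""

    def __init__(self, channels: int, n_blocks: int = 3, out_dim: int = 64):
        super().__init__()
        self.blocks = nn.Sequential(*[ResBlock1d(channels) for _ in range(n_blocks)])
        self.head = nn.Linear(channels, out_dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.blocks(x).mean(dim=-1)  # global average pool over time
        return self.head(h)


# Shape check on a dummy multivariate time series batch: (batch, channels, time).
x = torch.randn(8, 6, 100)
masks = nn.ModuleList([MaskNet(channels=6) for _ in range(4)])  # K = 4 assumed
enc = Encoder(channels=6)
views = torch.stack([enc(m(x)) for m in masks], dim=1)  # (8, 4, 64)
print(views.shape)
```

This sketch only covers the architectural details quoted in the table; the training objective and the baseline-specific output heads (DAGMM, transformation prediction) are not reproduced here.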