Latent Outlier Exposure for Anomaly Detection with Contaminated Data

Authors: Chen Qiu, Aodong Li, Marius Kloft, Maja Rudolph, Stephan Mandt

ICML 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experiments with several backbone models on three image datasets, 30 tabular data sets, and a video anomaly detection benchmark showed consistent and significant improvements over the baselines.
Researcher Affiliation | Collaboration | ¹Bosch Center for Artificial Intelligence, ²TU Kaiserslautern, Germany, ³UC Irvine, USA.
Pseudocode | Yes | Algorithm 1 Training process of LOE (see the sketch below the table)
Open Source Code | Yes | Code is available at https://github.com/boschresearch/LatentOE-AD.git
Open Datasets | Yes | We experiment with three image datasets: CIFAR-10, Fashion-MNIST, and MVTEC (Bergmann et al., 2019)... We study all 30 tabular datasets used in the empirical analysis of a recent state-of-the-art paper (Shenkar & Wolf, 2022)... We study UCSD Peds1, a popular benchmark for video anomaly detection.
Dataset Splits | Yes | On CIFAR-10 and F-MNIST, we follow the standard one-vs.-rest protocol... We follow the pre-processing and train-test split of the datasets in Shenkar & Wolf (2022).
Hardware Specification | No | The paper mentions that experiments were performed on "GPU clusters" in the acknowledgements, but does not provide specific details such as GPU models, CPU models, or memory specifications.
Software Dependencies | No | The paper mentions using the "Adam (Kingma & Ba, 2014) stochastic optimizer" and various backbone models like "ResNet" but does not specify version numbers for these or other software libraries/frameworks.
Experiment Setup | Yes | During training, we used Adam (Kingma & Ba, 2014) stochastic optimizer and set the mini-batch size to be 25. The learning rate is 0.01, and we trained the model for 200 epochs... On CIFAR-10, we set minibatch size to be 500, learning rate to be 4e-4, 30 training epochs with Adam optimizer... On MVTEC, we set minibatch size to be 40, learning rate to be 2e-4, 30 training epochs with Adam optimizer.
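
The following is a minimal PyTorch-style sketch of the training procedure summarized in the Pseudocode row (Algorithm 1 of the paper). It is reconstructed from the paper's description rather than from the released repository: `model.loss_normal` and `model.loss_anomaly` are hypothetical placeholders for the backbone's per-sample normal loss L_n and anomaly loss L_a, and `alpha` is the assumed contamination ratio. The hard variant (LOE_H) assigns labels in {0, 1}; the soft variant (LOE_S) assigns 0.5 to the inferred anomalies.

```python
import torch

def train_loe(model, loader, optimizer, alpha=0.1, hard=True, epochs=30):
    """Latent Outlier Exposure training sketch with per-batch label inference.

    alpha: assumed contamination ratio of the training data.
    hard:  True -> LOE_H (labels in {0, 1}); False -> LOE_S (0.5 for anomalies).
    """
    for _ in range(epochs):
        for x in loader:
            loss_n = model.loss_normal(x)   # per-sample normal loss L_n(x_i), hypothetical API
            loss_a = model.loss_anomaly(x)  # per-sample anomaly loss L_a(x_i), hypothetical API

            # Step 1: infer latent labels y for this mini-batch. The alpha-fraction
            # of samples whose normal loss exceeds the anomaly loss the most is
            # flagged as anomalous.
            score = (loss_n - loss_a).detach()
            k = int(alpha * x.shape[0])
            y = torch.zeros(x.shape[0], device=x.device)
            if k > 0:
                _, idx = torch.topk(score, k)
                y[idx] = 1.0 if hard else 0.5

            # Step 2: update model parameters on the joint objective
            # (1 - y_i) * L_n(x_i) + y_i * L_a(x_i).
            loss = ((1.0 - y) * loss_n + y * loss_a).mean()
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```

The labels y are latent variables used only during training; no anomaly labels are assumed to be available in the contaminated training set.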
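The one-vs.-rest protocol quoted in the Dataset Splits row implies a contaminated training set: one class plays the role of normal data and a fraction of samples from the remaining classes is mixed in as unlabeled anomalies. The sketch below shows one plausible way to build such a split; the function name, the default contamination ratio, and the NumPy-array interface are illustrative assumptions, not the repository's actual data pipeline.

```python
import numpy as np

def make_contaminated_split(x_train, y_train, normal_class, contamination=0.1, seed=0):
    """Build a one-vs.-rest training set contaminated with unlabeled anomalies."""
    rng = np.random.default_rng(seed)
    normal = x_train[y_train == normal_class]
    outliers = x_train[y_train != normal_class]

    # Number of anomalies so that they make up `contamination` of the final set.
    n_anom = int(contamination / (1.0 - contamination) * len(normal))
    anom = outliers[rng.choice(len(outliers), size=n_anom, replace=False)]

    x = np.concatenate([normal, anom], axis=0)
    rng.shuffle(x)  # labels are discarded: training is treated as unsupervised
    return x
```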
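For convenience, the optimization settings quoted in the Experiment Setup row can be collected into a single configuration sketch. Only the values stated in the paper are reproduced; the key names and the helper function are illustrative choices and do not mirror the released code's configuration files.

```python
import torch

# Values taken from the quoted experiment setup; keys and structure are illustrative.
EXPERIMENT_SETUP = {
    "tabular": {"batch_size": 25,  "lr": 1e-2, "epochs": 200},
    "cifar10": {"batch_size": 500, "lr": 4e-4, "epochs": 30},
    "mvtec":   {"batch_size": 40,  "lr": 2e-4, "epochs": 30},
}

def make_optimizer(model, dataset):
    """All reported runs use Adam; only the learning rate differs per dataset."""
    return torch.optim.Adam(model.parameters(), lr=EXPERIMENT_SETUP[dataset]["lr"])
```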