Energy-Based Models for Anomaly Detection: A Manifold Diffusion Recovery Approach

Authors: Sangwoong Yoon, Young-Uk Jin, Yung-Kyun Noh, Frank C. Park

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results show that MPDR exhibits strong performance across various anomaly detection tasks involving diverse data types, such as images, vectors, and acoustic signals.
Researcher Affiliation | Collaboration | Sangwoong Yoon (Korea Institute for Advanced Study, swyoon@kias.re.kr); Young-Uk Jin (Samsung Electronics, yueric.jin@samsung.com); Yung-Kyun Noh (Hanyang University and Korea Institute for Advanced Study, nohyung@hanyang.ac.kr); Frank C. Park (Seoul National University and Saige Research, fcp@snu.ac.kr)
Pseudocode | Yes | Algorithm 1: Manifold Projection-Diffusion Recovery. (A hedged sketch of the procedure appears after the table.)
Open Source Code | Yes | The implementation of MPDR is publicly available at https://github.com/swyoon/manifold-projection-diffusion-recovery-pytorch.
Open Datasets | Yes | Models are trained on the training split of MNIST, excluding the digit designated to be held-out. The training split contains 60,000 images... KMNIST (KMNIST-MNIST) [55]... EMNIST (EMNIST-Letters) [56]... For the Omniglot [57] dataset... We use the test split of Fashion MNIST [58]... CIFAR-10 [59] contains 60,000 training images... SVHN [60] is a set of digit images... Texture [61] dataset... CelebA [62] is a dataset... CIFAR-100 [59] contains 60,000 training images... DCASE 2020 Challenge Task 2 dataset [41]...
Dataset Splits | Yes | Models are trained on the training split of MNIST, excluding the digit designated to be held-out. The training split contains 60,000 images, and the hold-out procedure reduces the training set to approximately 90% of its original size. We evaluate the models on the test split of MNIST, which contains a total of 10,000 images. (A split-construction snippet appears after the table.)
Hardware Specification | Yes | Each run is executed on a single Tesla V100 GPU.
Software Dependencies | No | The paper mentions the 'PyTorch default setting' and the Adam optimizer, but it does not specify version numbers for PyTorch or any other critical software component (e.g., Python, CUDA, numpy, scikit-learn). (A version-logging snippet appears after the table.)
Experiment Setup | Yes | All optimizations are performed using Adam with a learning rate of 0.0001. Each run is executed on a single Tesla V100 GPU. Other details, including network architectures and LMC hyperparameters, can be found in the Appendix: Table 7 (hyperparameters for LMC) and Table 8 (convolutional neural network architectures used in experiments). (An optimizer-setup sketch appears after the table.)
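
To make the Algorithm 1 row concrete, here is a minimal PyTorch sketch of one training step in the spirit of Manifold Projection-Diffusion Recovery, assuming a pretrained autoencoder (encoder, decoder), an energy network (energy), and an optimizer over the energy parameters (opt). The hyperparameter values, the single visible-space Langevin chain, and the plain contrastive loss are illustrative assumptions, not the paper's exact sampler or regularizers (those live in its Appendix, Table 7).

```python
import torch

def mpdr_training_step(x, encoder, decoder, energy, opt,
                       sigma=0.1, lmc_steps=20, step_size=10.0,
                       noise_scale=0.005):
    """One training step in the spirit of Algorithm 1 (Manifold
    Projection-Diffusion Recovery). Names and hyperparameter values
    are illustrative placeholders, not the paper's settings."""
    # 1) Manifold projection: encode the data onto the autoencoder manifold.
    z = encoder(x)
    # 2) Diffusion: perturb the latent code with Gaussian noise.
    z_tilde = z + sigma * torch.randn_like(z)
    # 3) Decode the perturbed code to initialize the negative samples.
    x_neg = decoder(z_tilde).detach()
    # 4) Recovery: short-run Langevin MC under the energy, started at x_neg.
    for _ in range(lmc_steps):
        x_neg = x_neg.detach().requires_grad_(True)
        grad = torch.autograd.grad(energy(x_neg).sum(), x_neg)[0]
        x_neg = (x_neg - step_size * grad
                 + noise_scale * torch.randn_like(x_neg)).detach()
    # 5) Contrastive update: lower the energy of data, raise it on negatives.
    loss = energy(x).mean() - energy(x_neg).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```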
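The hold-out protocol quoted in the Dataset Splits row (training on MNIST with one digit class removed, which keeps roughly 90% of the 60,000 training images) can be reproduced with standard torchvision utilities. The function name and data root below are placeholders, not from the paper.

```python
from torch.utils.data import Subset
from torchvision import datasets, transforms

def mnist_without_digit(root, held_out_digit, train=True):
    """Remove one digit class from MNIST, keeping roughly 90% of the
    60,000 training images, as described in the Dataset Splits row."""
    ds = datasets.MNIST(root, train=train, download=True,
                        transform=transforms.ToTensor())
    keep = [i for i, y in enumerate(ds.targets.tolist())
            if y != held_out_digit]
    return Subset(ds, keep)

# Example: hold out digit 9 and train on the remaining nine classes.
train_set = mnist_without_digit("./data", held_out_digit=9, train=True)
```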
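Since the paper reports no version numbers, a log like the following would resolve the Software Dependencies gap for anyone rerunning the code. These are standard PyTorch and stdlib introspection calls, not something the paper provides.

```python
import platform
import torch

# Record the software stack alongside each run's results.
print("python :", platform.python_version())
print("pytorch:", torch.__version__)
print("cuda   :", torch.version.cuda)
print("cudnn  :", torch.backends.cudnn.version())
```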
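The only optimization detail the paper states is Adam with a learning rate of 0.0001; the sketch below pins exactly that. Splitting the parameters into separate optimizers for the energy network and the autoencoder is an assumption made here for illustration.

```python
import torch

def make_optimizers(energy, autoencoder):
    """The paper states only 'Adam with a learning rate of 0.0001';
    separate optimizers for the two networks are an assumption."""
    opt_energy = torch.optim.Adam(energy.parameters(), lr=1e-4)
    opt_ae = torch.optim.Adam(autoencoder.parameters(), lr=1e-4)
    return opt_energy, opt_ae
```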