Deep Semi-Supervised Anomaly Detection
Authors: Lukas Ruff, Robert A. Vandermeulen, Nico Görnitz, Alexander Binder, Emmanuel Müller, Klaus-Robert Müller, Marius Kloft
ICLR 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In extensive experiments on MNIST, Fashion-MNIST, and CIFAR-10, along with other anomaly detection benchmark datasets, we demonstrate that our method is on par or outperforms shallow, hybrid, and deep competitors, yielding appreciable performance improvements even when provided with only little labeled data. |
| Researcher Affiliation | Collaboration | Lukas Ruff (1), Robert A. Vandermeulen (1), Nico Görnitz (1, 2), Alexander Binder (3), Emmanuel Müller (4), Klaus-Robert Müller (1, 5, 6), Marius Kloft (7); (1) Technical University of Berlin, Germany; (2) 123ai.de, Berlin, Germany; (3) Singapore University of Technology & Design, Singapore; (4) Bonn-Aachen International Center for Information Technology, Germany; (5) Korea University, Seoul, Republic of Korea; (6) Max Planck Institute for Informatics, Saarbrücken, Germany; (7) Technical University of Kaiserslautern, Germany |
| Pseudocode | Yes | Algorithm 1 Optimization of Deep SAD (see the hedged loss sketch below the table) |
| Open Source Code | Yes | Our code is available at: https://github.com/lukasruff/Deep-SAD-PyTorch |
| Open Datasets | Yes | We evaluate Deep SAD on MNIST, Fashion-MNIST, and CIFAR-10 as well as on classic AD benchmark datasets. |
| Dataset Splits | Yes | In our experiments we deliberately grant the shallow and hybrid methods an unfair advantage by selecting their hyperparameters to maximize AUC on a subset (10%) of the test set to minimize hyperparameter selection issues. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory, or cloud instances) used for running experiments were mentioned in the paper. |
| Software Dependencies | No | The paper mentions using the Adam optimizer and Batch Normalization, and the code link suggests PyTorch, but no specific version numbers for any software dependencies are provided in the text. |
| Experiment Setup | Yes | For all deep approaches and on all datasets, we employ a two-phase (searching and fine-tuning) learning rate schedule. In the searching phase we first train with a learning rate ε = 10^-4 for 50 epochs. In the fine-tuning phase we train with ε = 10^-5 for another 100 epochs. We always use a batch size of 200. We set λ = 10^-6 and equally weight the unlabeled and labeled examples by setting η = 1 if not reported otherwise. (See the training-loop sketch below the table.) |
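
The Pseudocode row above cites "Algorithm 1 Optimization of Deep SAD". As a minimal, hedged sketch of the per-batch loss that algorithm optimizes, written in PyTorch to match the linked repository, one formulation is shown below; the function name `deep_sad_loss` and the `eps` smoothing constant are illustrative assumptions, not names taken from the paper.

```python
import torch

def deep_sad_loss(z, c, semi_targets, eta=1.0, eps=1e-6):
    """Hypothetical sketch of the per-batch Deep SAD objective.

    z            -- network outputs phi(x; W), shape (batch, rep_dim)
    c            -- fixed hypersphere center in output space, shape (rep_dim,)
    semi_targets -- 0 for unlabeled, +1 for labeled normal, -1 for labeled anomaly
    eta          -- weight balancing labeled vs. unlabeled terms (paper sets eta = 1)
    eps          -- small constant keeping the inverse distance finite (assumption)
    """
    dist = torch.sum((z - c) ** 2, dim=1)  # squared distance to the center
    # Unlabeled points are pulled towards c; labeled points are weighted by eta and
    # raised to the power of their label, so anomalies (-1) are pushed away via 1/dist.
    losses = torch.where(
        semi_targets == 0,
        dist,
        eta * ((dist + eps) ** semi_targets.float()),
    )
    return torch.mean(losses)
```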
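
The Experiment Setup row quotes the two-phase learning rate schedule (ε = 10^-4 for 50 epochs, then ε = 10^-5 for 100 more), batch size 200, weight decay λ = 10^-6, and η = 1. The following is a sketch of a training loop under those settings, assuming `model`, `train_loader`, and the hypersphere center `c` are provided, and using `MultiStepLR` only as one possible way to implement the learning rate drop; it is not the authors' exact implementation.

```python
from torch import optim

def train_deep_sad(model, train_loader, c, eta=1.0, device="cpu"):
    """Two-phase schedule sketch: lr 1e-4 for 50 epochs, then 1e-5 for 100 epochs."""
    optimizer = optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-6)  # lambda = 1e-6
    # gamma = 0.1 at milestone 50 drops the learning rate from 1e-4 to 1e-5.
    scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=[50], gamma=0.1)

    model.to(device).train()
    for epoch in range(150):                      # 50 "searching" + 100 fine-tuning epochs
        for x, semi_targets in train_loader:      # batches of size 200
            x, semi_targets = x.to(device), semi_targets.to(device)
            optimizer.zero_grad()
            loss = deep_sad_loss(model(x), c, semi_targets, eta=eta)  # sketch above
            loss.backward()
            optimizer.step()
        scheduler.step()
    return model
```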