Deep One-Class Classification
Authors: Lukas Ruff, Robert Vandermeulen, Nico Goernitz, Lucas Deecke, Shoaib Ahmed Siddiqui, Alexander Binder, Emmanuel Müller, Marius Kloft
ICML 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show the effectiveness of our method on MNIST and CIFAR-10 image benchmark datasets as well as on the detection of adversarial examples of GTSRB stop signs. |
| Researcher Affiliation | Collaboration | 1 Hasso Plattner Institute, Potsdam, Germany; 2 Department of Computer Science, TU Kaiserslautern, Kaiserslautern, Germany; 3 Machine Learning Group, Department of Electrical Engineering & Computer Science, TU Berlin, Berlin, Germany; 4 School of Informatics, University of Edinburgh, Edinburgh, Scotland; 5 German Research Center for Artificial Intelligence (DFKI GmbH), Kaiserslautern, Germany; 6 ISTD pillar, Singapore University of Technology and Design, Singapore. |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | Yes | We provide our code at https://github.com/lukasruff/Deep-SVDD. |
| Open Datasets | Yes | We evaluate Deep SVDD on the well-known MNIST (LeCun et al., 2010) and CIFAR-10 (Krizhevsky & Hinton, 2009) datasets. ... German Traffic Sign Recognition Benchmark (GTSRB) dataset (Stallkamp et al., 2011). |
| Dataset Splits | No | The paper mentions 'We use the original training and test splits in our experiments' for MNIST and CIFAR-10, and specifies training and test set sizes for GTSRB, but does not explicitly detail a separate validation split for the main Deep SVDD models. A 'small holdout set' is mentioned only for hyperparameter tuning of shallow baselines, not for the primary deep models. |
| Hardware Specification | No | The paper mentions general statements like 'by processing on multiple GPUs' but does not specify any particular GPU models, CPU models, or detailed hardware configurations used for the experiments. |
| Software Dependencies | No | The paper mentions using 'Adam optimizer (Kingma & Ba, 2014)' and 'Batch Normalization (Ioffe & Szegedy, 2015)' but does not provide specific version numbers for software libraries, frameworks, or programming languages (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | We employ a simple two-phase learning rate schedule (searching + fine-tuning) with initial learning rate η = 10⁻⁴, and subsequently η = 10⁻⁵. For DCAE we train 250 + 100 epochs, for Deep SVDD 150 + 100. Leaky ReLU activations are used with leakiness α = 0.1. On MNIST, we use a CNN with two modules... and a final dense layer of 32 units. On CIFAR-10, we use a CNN with three modules... followed by a final dense layer of 128 units. We use a batch size of 200 and set the weight decay hyperparameter to λ = 10⁻⁶. |
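
The Experiment Setup row quotes enough hyperparameters to outline how training could be reproduced. Below is a minimal PyTorch sketch of a One-Class Deep SVDD training loop that uses only the values stated there (two-phase Adam schedule at η = 10⁻⁴ then 10⁻⁵ for 150 + 100 epochs, weight decay λ = 10⁻⁶, LeakyReLU with α = 0.1, and a 32-unit output layer for MNIST). The network body `SmallMNISTNet`, the fixed zero hypersphere center, and the data loader (assumed to use the stated batch size of 200) are illustrative assumptions, not the authors' released implementation.

```python
# Hedged sketch of a One-Class Deep SVDD training loop in PyTorch.
# Only the hyperparameters quoted in the table above are taken from the paper;
# the architecture and center initialization are simplified placeholders.
import torch
import torch.nn as nn


class SmallMNISTNet(nn.Module):
    """Toy stand-in for the paper's MNIST CNN (final dense layer of 32 units)."""

    def __init__(self, rep_dim: int = 32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 5, padding=2, bias=False),
            nn.LeakyReLU(0.1),  # leakiness alpha = 0.1, as quoted above
            nn.MaxPool2d(2),
            nn.Conv2d(8, 4, 5, padding=2, bias=False),
            nn.LeakyReLU(0.1),
            nn.MaxPool2d(2),
        )
        self.fc = nn.Linear(4 * 7 * 7, rep_dim, bias=False)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))


def train_one_class(net, loader, device="cpu"):
    """Two-phase schedule: 150 'searching' epochs at 1e-4, then 100 'fine-tuning' at 1e-5.

    `loader` is assumed to yield (image, label) batches of size 200; labels are ignored.
    """
    net.to(device)
    # Hypersphere center c: a fixed zero vector here for brevity; the paper
    # instead fixes c from an initial forward pass over the training data.
    c = torch.zeros(32, device=device)
    for lr, epochs in [(1e-4, 150), (1e-5, 100)]:
        opt = torch.optim.Adam(net.parameters(), lr=lr, weight_decay=1e-6)
        for _ in range(epochs):
            for x, _ in loader:
                x = x.to(device)
                dist = torch.sum((net(x) - c) ** 2, dim=1)
                loss = dist.mean()  # One-Class Deep SVDD objective
                opt.zero_grad()
                loss.backward()
                opt.step()
    return c
```

At test time, Deep SVDD scores a sample by its squared distance to the center c in the learned representation space, so larger distances indicate more anomalous inputs.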