Deep Anomaly Detection Using Geometric Transformations

Authors: Izhak Golan, Ran El-Yaniv

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We present extensive experiments using the proposed detector, which indicate that our technique consistently improves all known algorithms by a wide margin.
Researcher Affiliation | Academia | Izhak Golan, Department of Computer Science, Technion Israel Institute of Technology, Haifa, Israel, izikgo@cs.technion.ac.il; Ran El-Yaniv, Department of Computer Science, Technion Israel Institute of Technology, Haifa, Israel, rani@cs.technion.ac.il
Pseudocode | No | The paper states, 'A full and detailed algorithm is available in the supplementary material,' but does not include pseudocode or an algorithm block in the main text.
Open Source Code | Yes | A complete code of the proposed method's implementation and the conducted experiments is available at https://github.com/izikgo/AnomalyDetectionTransformations.
Open Datasets | Yes | We consider four image datasets in our experiments: CIFAR-10, CIFAR-100 [21], Cats Vs Dogs [11], and fashion-MNIST [38], which are described below.
Dataset Splits | Yes | CIFAR-10: There are 50,000 training images and 10,000 test images, divided equally across the classes. CIFAR-100: This set has a fixed train/test partition with 500 training images and 100 test images per class. Fashion-MNIST: The training set has 60,000 images and the test set has 10,000 images. Cats Vs Dogs: We split this dataset into a training set containing 10,000 images, and a test set of 2,500 images in each class. (A hypothetical split sketch follows the table.)
Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory sizes. It only mentions the model's depth and width parameters, which are architectural choices, not hardware.
Software Dependencies | No | The paper mentions using a 'Wide Residual Network (WRN) model' and the 'Adam optimizer' but does not specify any software versions (e.g., Python, PyTorch, TensorFlow, or library versions).
Experiment Setup | Yes | The parameters for the depth and width of the model for all 32x32 datasets were chosen to be 10 and 4, respectively, and for the Cats Vs Dogs dataset (64x64), 16 and 8, respectively. These hyperparameters were selected prior to conducting any experiment, and were fixed for all runs. We used the Adam [20] optimizer with default hyperparameters. Batch size for all methods was set to 128. The number of epochs was set to 200 on all benchmark models... (A hypothetical configuration sketch follows the table.)
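
The Cats Vs Dogs partition in the Dataset Splits row is the only non-standard split, so a minimal sketch of one way to materialize it is given below. The directory layout, file naming, the split_cats_vs_dogs helper, and the data/cats_vs_dogs path are illustrative assumptions, not the authors' code; the paper and repository should be consulted for the exact procedure.

```python
# Hypothetical sketch: a per-class Cats Vs Dogs split with 10,000 training and
# 2,500 test images per class, as reported above. File layout and naming are
# assumptions, not taken from the authors' repository.
import glob
import random

def split_cats_vs_dogs(root_dir, n_train=10_000, n_test=2_500, seed=0):
    rng = random.Random(seed)
    splits = {"train": [], "test": []}
    for label, class_name in enumerate(["cat", "dog"]):
        # Assumed Kaggle-style naming, e.g. "cat.0.jpg" ... "dog.12499.jpg".
        files = sorted(glob.glob(f"{root_dir}/{class_name}.*.jpg"))
        rng.shuffle(files)
        splits["train"] += [(path, label) for path in files[:n_train]]
        splits["test"] += [(path, label) for path in files[n_train:n_train + n_test]]
    return splits

splits = split_cats_vs_dogs("data/cats_vs_dogs")  # hypothetical path
```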
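
The Experiment Setup row maps directly onto a training configuration. The sketch below illustrates those reported choices under stated assumptions: build_wrn is a simplified stand-in for a real Wide Residual Network (it stacks plain convolutions rather than residual blocks), and the 72 output classes reflect the paper's composition of flips, translations, and rotations into transformation labels. It is not the authors' implementation.

```python
# Minimal sketch of the reported setup: WRN depth/width 10/4 for 32x32 datasets
# (16/8 for the 64x64 Cats Vs Dogs images), Adam with default hyperparameters,
# batch size 128, 200 epochs. `build_wrn` is a simplified placeholder, not a
# faithful Wide Residual Network.
import tensorflow as tf

NUM_TRANSFORMATIONS = 72  # transformation classes (flip x translation x rotation) per the paper

def build_wrn(input_shape, depth, widen_factor, num_classes):
    n_blocks = (depth - 4) // 6  # WRN convention: depth = 6 * n + 4
    inputs = tf.keras.Input(shape=input_shape)
    x = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
    for width in (16, 32, 64):
        for _ in range(n_blocks):
            # A real WRN uses residual blocks with batch norm; plain convs keep the sketch short.
            x = tf.keras.layers.Conv2D(width * widen_factor, 3, padding="same", activation="relu")(x)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)

# 32x32 datasets (e.g. CIFAR-10): depth 10, widen factor 4.
model = build_wrn((32, 32, 3), depth=10, widen_factor=4, num_classes=NUM_TRANSFORMATIONS)
model.compile(optimizer=tf.keras.optimizers.Adam(),  # default hyperparameters, as stated
              loss="categorical_crossentropy")
# model.fit(x_train, transformation_labels, batch_size=128, epochs=200)
```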