Robust Subspace Recovery Layer for Unsupervised Anomaly Detection

Authors: Chieh-Hsin Lai, Dongmian Zou, Gilad Lerman

ICLR 2020

Reproducibility Variable: Result — LLM Response
Research Type: Experimental — "Extensive numerical experiments with both image and document datasets demonstrate state-of-the-art precision and recall."; "We test our method on five datasets: Caltech 101 (Fei-Fei et al., 2007), Fashion-MNIST (Xiao et al., 2017), Tiny Imagenet (a small subset of Imagenet (Russakovsky et al., 2015)), Reuters-21578 (Lewis, 1997) and 20 Newsgroups (Lang, 1995)." (Section 4, Experimental Results)
Researcher Affiliation: Academia — "Chieh-Hsin Lai, Dongmian Zou & Gilad Lerman, School of Mathematics, University of Minnesota, Minneapolis, MN 55455, {laixx313, dzou, lerman}@umn.edu"
Pseudocode: Yes — Algorithm 1 (RSRAE) and Algorithm 2 (RSRAE+) in Appendix A.
Open Source Code: Yes — "Our implementation is available at https://github.com/dmzou/RSRAE.git" (footnote 1 in Section 4).
Open Datasets: Yes — "We test our method on five datasets: Caltech 101 (Fei-Fei et al., 2007), Fashion-MNIST (Xiao et al., 2017), Tiny Imagenet (a small subset of Imagenet (Russakovsky et al., 2015)), Reuters-21578 (Lewis, 1997) and 20 Newsgroups (Lang, 1995)."
Dataset Splits: No — The paper describes how inliers and outliers are sampled for each experiment, but it does not specify a separate validation split for model training or hyperparameter tuning. For Fashion-MNIST, for example: "We use the test set which contains 10,000 images and normalize pixel values to lie in [-1, 1]. In each experiment, we fix a class and the inliers are the test images in this class."
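The quoted sampling protocol — normalize pixel values to [-1, 1], fix one class as inliers, and draw outliers from the remaining classes — can be sketched as follows. This is an illustrative sketch only: the helper name, the `n_outliers` parameter, and the synthetic stand-in data are assumptions, and the excerpt does not state the outlier ratio actually used in the paper.

```python
import numpy as np

def split_inliers_outliers(images, labels, inlier_class, n_outliers, seed=0):
    """Hypothetical helper: treat one fixed class as inliers and sample
    outliers uniformly from the remaining classes. The outlier count is a
    free parameter; the excerpt does not specify the ratio."""
    rng = np.random.default_rng(seed)
    inliers = images[labels == inlier_class]
    rest = images[labels != inlier_class]
    picked = rng.choice(len(rest), size=n_outliers, replace=False)
    return inliers, rest[picked]

# Toy stand-in for the 10,000 Fashion-MNIST test images (uint8 pixels),
# normalized to [-1, 1] as described in the excerpt.
images = np.random.default_rng(0).integers(0, 256, size=(100, 28, 28))
images = images / 127.5 - 1.0
labels = np.repeat(np.arange(10), 10)  # 10 images per class in this toy set

inliers, outliers = split_inliers_outliers(images, labels,
                                           inlier_class=3, n_outliers=5)
```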
Hardware Specification: Yes — "All experiments were executed on a Linux machine with 64GB RAM and four GTX1080Ti GPUs."
Software Dependencies: No — "For all experiments with neural networks, we used TensorFlow and Keras. The LOF, OCSVM and IF methods are adapted from the scikit-learn packages." No version numbers are provided for these software components.
Experiment Setup: Yes — "We describe the structure of the RSRAE as follows. For the image datasets without deep features, the encoder consists of three convolutional layers: 5×5 kernels with 32 output channels, strides 2; ... For each experiment, the RSRAE model is optimized with Adam using a learning rate of 0.00025 and 200 epochs. The batch size is 128 for each gradient step."
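A minimal Keras sketch of the quoted setup is given below. Only the first encoder layer is fully specified in the excerpt (5×5 kernels, 32 channels, stride 2); the remaining layers are elided there and are not reproduced here. The input shape and `"same"` padding are assumptions for illustration, not details from the paper.

```python
import tensorflow as tf
from tensorflow import keras

# Stub of the encoder's first layer, per the excerpt:
# 5x5 kernels, 32 output channels, stride 2. Input shape and padding
# are illustrative assumptions; the excerpt elides the later layers.
inputs = keras.Input(shape=(32, 32, 1))
x = keras.layers.Conv2D(filters=32, kernel_size=5, strides=2,
                        padding="same")(inputs)
encoder_stub = keras.Model(inputs, x)

# Training configuration from the excerpt: Adam with learning rate 0.00025.
# The 200 epochs and batch size 128 would be passed to model.fit, e.g.
#   model.fit(data, epochs=200, batch_size=128)
optimizer = keras.optimizers.Adam(learning_rate=0.00025)
```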