RaPP: Novelty Detection with Reconstruction along Projection Pathway
Authors: Ki Hyun Kim, Sangwoo Shim, Yongsub Lim, Jongseob Jeon, Jeongwoo Choi, Byungchan Kim, Andre S. Yoon
ICLR 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through extensive experiments using diverse datasets, we validate that RaPP improves novelty detection performances of autoencoder-based approaches. Besides, we show that RaPP outperforms recent novelty detection methods evaluated on popular benchmarks. |
| Researcher Affiliation | Industry | Ki Hyun Kim, Sangwoo Shim, Yongsub Lim, Jongseob Jeon, Jeongwoo Choi, Byungchan Kim, Andre S. Yoon Makina Rocks {khkim, sangwoo, yongsub, jongseob.jeon}@makinarocks.ai {jeongwoo, kbc8894, andre}@makinarocks.ai |
| Pseudocode | Yes | Algorithm 1: RaPP to compute a novelty score. (A hedged sketch of this computation appears after the table.) |
| Open Source Code | No | The paper does not contain any explicit statement about releasing source code for the described methodology or a direct link to a code repository. |
| Open Datasets | Yes | The datasets from Kaggle and the UCI repository are chosen from problem sets of anomaly detection and multi-class classification, summarized in Table 1. (from Section 5.1). References include direct URLs for datasets such as: EOPT. https://www.kaggle.com/init-owl/high-storage-system-data-for-energy-optimization., F-MNIST. https://github.com/zalandoresearch/fashion-mnist., MI. https://www.kaggle.com/shasun/tool-wear-detection-in-cnc-mill., MNIST. http://yann.lecun.com/exdb/mnist/., NASA. https://www.kaggle.com/shrutimehta/nasa-asteroids-classification., OTTO. https://www.kaggle.com/c/otto-group-product-classification-challenge., SNSR. https://archive.ics.uci.edu/ml/datasets/dataset+for+sensorless+drive+diagnosis., STL. https://www.kaggle.com/uciml/faulty-steel-plates. |
| Dataset Splits | Yes | training sets contain only normal samples and test sets contain both normal and anomaly samples in our evaluation setups. (Section 5.1). We train AE, VAE and AAE with Adam optimizer (Kingma & Ba, 2015), and select the model with the lowest validation loss as the best model. (Section 5.3). |
| Hardware Specification | No | Appendix A mentions "Torch SVD utilizing GPU" and "fbpca running only on CPU", but it does not specify any particular GPU or CPU models (e.g., NVIDIA A100, Intel Xeon E5). (The SVD step this refers to is sketched after the table.) |
| Software Dependencies | No | The paper mentions concepts like "Leaky-ReLU" activation, "batch normalization", and "Adam optimizer", and tools like "PyTorch SVD" and "fbpca", but it does not provide specific version numbers for any software dependencies (e.g., Python 3.8, PyTorch 1.9). |
| Experiment Setup | Yes | We use symmetric architecture with fully-connected layers for the three base models, AE, VAE, and AAE. Each encoder and decoder has 10 layers with different bottleneck sizes. (Section 5.3). We train AE, VAE and AAE with Adam optimizer (Kingma & Ba, 2015), and select the model with the lowest validation loss as the best model. For training stability of VAE, 10 Monte Carlo samples were averaged in the reparameterization trick (Kingma & Welling, 2014) to obtain reconstruction from the decoder. (Section 5.3). (A training-loop sketch of this setup follows the table.) |
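The Algorithm 1 row above summarizes RaPP's core procedure: run an input through a trained autoencoder, run the reconstruction back through the encoder, and aggregate the differences between the two sets of hidden activations. Below is a minimal PyTorch sketch of the SAP (simple aggregation along pathway) variant; the `MLPEncoder` class, its layer sizes, and the use of Leaky-ReLU on every layer are illustrative assumptions, not the authors' exact configuration.

```python
# Hedged sketch of RaPP's SAP novelty score (Algorithm 1 in the paper).
# Layer sizes and activations are illustrative assumptions; the paper uses
# symmetric 10-layer fully-connected encoders/decoders (Section 5.3).
import torch
import torch.nn as nn

class MLPEncoder(nn.Module):
    """Fully-connected encoder that exposes its per-layer activations."""
    def __init__(self, sizes=(784, 256, 64, 16)):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Linear(a, b) for a, b in zip(sizes, sizes[1:])
        )
        self.act = nn.LeakyReLU()

    def forward_with_hiddens(self, x):
        """Return the list of hidden activations h_1, ..., h_L for a batch x."""
        hiddens = []
        for layer in self.layers:
            x = self.act(layer(x))
            hiddens.append(x)
        return hiddens

def sap_score(encoder, decoder, x):
    """Sum over layers of squared distances between the hidden activations
    of x and those of its reconstruction A(x) = decoder(encoder(x))."""
    with torch.no_grad():
        h = encoder.forward_with_hiddens(x)          # h_i(x)
        x_hat = decoder(h[-1])                       # A(x), via the bottleneck
        h_hat = encoder.forward_with_hiddens(x_hat)  # h_i(A(x))
    # One score per batch row; larger means more novel.
    return sum(((a - b) ** 2).sum(dim=1) for a, b in zip(h, h_hat))
```

Given a trained `encoder`/`decoder` pair, `sap_score(encoder, decoder, batch)` returns one novelty score per input row.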
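The Torch-SVD-versus-fbpca remark quoted from Appendix A concerns the SVD used by RaPP's NAP (normalized aggregation along pathway) variant: the matrix of pathway differences collected on the training set is centered and decomposed by SVD, and new difference vectors are whitened with the resulting singular vectors and values. A minimal sketch, assuming a precomputed difference matrix `D` with one concatenated pathway-difference vector per training sample (the `eps` guard is an added assumption):

```python
# Hedged sketch of NAP-style normalization via SVD.
import torch

def make_nap_scorer(D, eps=1e-8):
    """Center D, take its SVD, and score new difference vectors d by the
    squared norm of (d - mu) V / s, i.e. a whitened distance."""
    mu = D.mean(dim=0, keepdim=True)
    # On a CUDA tensor this uses the GPU SVD path ("Torch SVD utilizing GPU"
    # in Appendix A); fbpca's randomized fbpca.pca is the CPU-only alternative.
    _, S, Vh = torch.linalg.svd(D - mu, full_matrices=False)
    def score(d):
        return (((d - mu) @ Vh.T) / (S + eps)).pow(2).sum(dim=1)
    return score
```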
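Finally, the Section 5.3 setup quoted in the last row (symmetric fully-connected architecture, Adam, best model selected by validation loss) translates roughly to the loop below. The input dimension, bottleneck size, linear width schedule, learning rate, and epoch count are illustrative assumptions; the paper specifies 10 layers per encoder and decoder, batch normalization, and Leaky-ReLU, but not these hyperparameters, and the VAE's 10-sample Monte Carlo averaging is omitted here.

```python
# Hedged sketch of the Section 5.3 training setup. Hyperparameters
# (width schedule, lr, epochs) are assumptions, not from the paper.
import copy
import torch
import torch.nn as nn

def make_symmetric_ae(in_dim=784, bottleneck=20, n_layers=10):
    """Symmetric fully-connected AE; widths shrink linearly to the bottleneck."""
    widths = torch.linspace(in_dim, bottleneck, n_layers + 1).round().int().tolist()
    def stack(ws):
        layers = []
        for i, (a, b) in enumerate(zip(ws, ws[1:])):
            layers.append(nn.Linear(a, b))
            if i < len(ws) - 2:  # no norm/activation after the final layer
                layers += [nn.BatchNorm1d(b), nn.LeakyReLU()]
        return nn.Sequential(*layers)
    return stack(widths), stack(widths[::-1])

def train_ae(encoder, decoder, train_loader, val_loader, epochs=100, lr=1e-3):
    opt = torch.optim.Adam(
        list(encoder.parameters()) + list(decoder.parameters()), lr=lr
    )
    mse = nn.MSELoss()
    best_val, best_state = float("inf"), None
    for _ in range(epochs):
        encoder.train(); decoder.train()
        for x in train_loader:  # training data: normal samples only (Section 5.1)
            opt.zero_grad()
            mse(decoder(encoder(x)), x).backward()
            opt.step()
        encoder.eval(); decoder.eval()
        with torch.no_grad():
            val = sum(mse(decoder(encoder(x)), x).item() for x in val_loader)
        if val < best_val:  # keep the lowest-validation-loss checkpoint
            best_val = val
            best_state = copy.deepcopy(
                (encoder.state_dict(), decoder.state_dict())
            )
    return best_state
```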