RCA: A Deep Collaborative Autoencoder Approach for Anomaly Detection

Authors: Boyang Liu, Ding Wang, Kaixiang Lin, Pang-Ning Tan, Jiayu Zhou

IJCAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We performed extensive experiments to compare the performance of RCA against various baseline methods. The code is available at https://github.com/illidanlab/RCA. For evaluation, we used 18 benchmark datasets obtained from the Stony Brook ODDS library [Rayana, 2016]. We reserve 60% of the data for training and the remaining 40% for testing. The performance of the competing methods is evaluated based on their Area under the ROC curve (AUC) scores. (A minimal sketch of this split-and-AUC protocol is given after the table.)
Researcher Affiliation | Academia | Boyang Liu, Ding Wang, Kaixiang Lin, Pang-Ning Tan, and Jiayu Zhou, Michigan State University, Department of Computer Science and Engineering, {liuboya2, wangdin1, linkaixi, ptan, jiayuz}@msu.edu
Pseudocode | Yes | Pseudocode for RCA with k = 2 autoencoders is shown in Algorithm 1 (Robust Collaborative Autoencoders). (A hedged re-implementation sketch of this algorithm follows the table.)
Open Source Code | Yes | The code is available at https://github.com/illidanlab/RCA.
Open Datasets | Yes | For evaluation, we used 18 benchmark datasets obtained from the Stony Brook ODDS library [Rayana, 2016]. Additional experimental results on the CIFAR10 dataset are given in the longer version of the paper. Reference: [Rayana, 2016] Shebuti Rayana. ODDS library. http://odds.cs.stonybrook.edu, 2016. Accessed: 2020-09-01.
Dataset Splits | No | We reserve 60% of the data for training and the remaining 40% for testing. The paper does not explicitly mention a validation split percentage or size; it only specifies training and testing splits.
Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments (e.g., GPU models, CPU types, or memory specifications).
Software Dependencies | No | The paper does not provide specific version numbers for software dependencies used in the experiments (e.g., Python, PyTorch, TensorFlow, or specific library versions).
Experiment Setup | Yes | Algorithm 1 (Robust Collaborative Autoencoders) input: training data X_trn, test data X_tst, anomaly ratio ϵ, dropout rate r, decay rate α, and max epoch for training. To ensure fair comparison, we maintain similar hyperparameter settings for all the competing DNN-based approaches. More discussion about our experimental setting will be given in the long version of the paper.
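
The Pseudocode and Experiment Setup rows name Algorithm 1 and its inputs (anomaly ratio ϵ, dropout rate r, decay rate α, max epoch) but do not reproduce the algorithm itself. The following is a minimal PyTorch sketch of a k = 2 collaborative-autoencoder training loop consistent with those inputs; the network architecture, optimizer, default hyperparameter values, the decay schedule for the sample-selection ratio, and the test-time scoring are assumptions of this sketch, not the authors' released implementation (see the GitHub link above for that).

import torch
import torch.nn as nn

def make_autoencoder(d_in, d_hidden, dropout):
    # One-hidden-layer autoencoder; the real architecture is not given in the
    # table and is an assumption of this sketch.
    return nn.Sequential(
        nn.Linear(d_in, d_hidden), nn.ReLU(), nn.Dropout(dropout),
        nn.Linear(d_hidden, d_in),
    )

def train_rca(X_trn, eps=0.1, r=0.2, alpha=0.99, max_epoch=100, d_hidden=32, lr=1e-3):
    # X_trn: (n, d) float tensor of unlabeled training data.
    # eps = anomaly ratio, r = dropout rate, alpha = decay rate (Algorithm 1 inputs).
    n, d = X_trn.shape
    ae1 = make_autoencoder(d, d_hidden, r)
    ae2 = make_autoencoder(d, d_hidden, r)
    opt = torch.optim.Adam(list(ae1.parameters()) + list(ae2.parameters()), lr=lr)
    keep_ratio = 1.0  # fraction of samples kept per epoch; decays toward 1 - eps
    for _ in range(max_epoch):
        keep_ratio = max(1.0 - eps, keep_ratio * alpha)  # assumed decay schedule
        k = max(1, int(keep_ratio * n))
        with torch.no_grad():
            # Each autoencoder ranks samples by its own reconstruction error and
            # keeps the k samples it reconstructs best (its "clean" selection).
            err1 = ((ae1(X_trn) - X_trn) ** 2).sum(dim=1)
            err2 = ((ae2(X_trn) - X_trn) ** 2).sum(dim=1)
            idx1 = torch.argsort(err1)[:k]
            idx2 = torch.argsort(err2)[:k]
        # Collaboration step (co-teaching style, as this sketch interprets it):
        # each autoencoder is updated on the samples selected by its peer.
        loss = ((ae1(X_trn[idx2]) - X_trn[idx2]) ** 2).mean() \
             + ((ae2(X_trn[idx1]) - X_trn[idx1]) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return ae1, ae2

def anomaly_score(ae1, ae2, X_tst):
    # Test-time score: summed reconstruction error of the two autoencoders.
    # Any dropout-based ensembling at test time is omitted from this sketch.
    ae1.eval()
    ae2.eval()
    with torch.no_grad():
        return ((ae1(X_tst) - X_tst) ** 2).sum(dim=1) \
             + ((ae2(X_tst) - X_tst) ** 2).sum(dim=1)

Under these assumptions, scoring a test set amounts to scores = anomaly_score(*train_rca(X_trn), X_tst), with higher scores treated as more anomalous.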
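
The Research Type and Dataset Splits rows describe the evaluation protocol: 18 ODDS benchmark datasets, a 60/40 train/test split, and AUC scoring. Below is a self-contained sketch of that protocol; the file name thyroid.mat, the 'X'/'y' keys of the .mat file, the fixed random seed, and the stand-in distance-to-mean detector are illustrative assumptions, not details taken from the paper.

import numpy as np
from scipy.io import loadmat
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Load one of the 18 ODDS benchmark .mat files (file name is illustrative).
data = loadmat("thyroid.mat")
X = data["X"].astype(np.float64)
y = data["y"].ravel()  # 1 = anomaly, 0 = normal

# 60% of the data for training and the remaining 40% for testing; the paper
# reports no separate validation split.
X_trn, X_tst, y_trn, y_tst = train_test_split(X, y, train_size=0.6, random_state=0)

# Stand-in detector for illustration only: distance to the training mean.
scores = np.linalg.norm(X_tst - X_trn.mean(axis=0), axis=1)

print("AUC:", roc_auc_score(y_tst, scores))

Replacing the stand-in scores with the RCA scores from the sketch above matches the split-and-AUC evaluation described in the table.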