GAN Ensemble for Anomaly Detection

Authors: Xu Han, Xiaohui Chen, Li-Ping Liu (pp. 4090-4097)

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The empirical study constructs ensembles based on four different types of detection models, and the results show that the ensemble outperforms the single model for all four model types. "In the empirical study, we test the proposed ensemble method on both synthetic and real datasets. The results indicate that ensembles significantly improve the performance over single detection models. The empirical analysis of vector representations verifies our theoretical analysis."
Researcher Affiliation | Academia | Xu Han*, Xiaohui Chen*, Li-Ping Liu; Department of Computer Science, Tufts University; {Xu.Han, Xiaohui.Chen, Liping.Liu}@tufts.edu
Pseudocode | Yes | Algorithm 1: GAN ensemble for anomaly detection. Input: training set X = {x_i}_{i=1}^N. Output: trained generators {(G_e(·; φ_i), G_d(·; ψ_i))}_{i=1}^I and discriminators {D(·; γ_j)}_{j=1}^J. (A hedged training-loop sketch follows the table below.)
Open Source Code | Yes | The implementation is available at https://github.com/tufts-ml/GAN-Ensemble-for-Anomaly-Detection.
Open Datasets | Yes | We evaluate our method against baseline methods on four datasets. KDD99 (Dua and Graff 2019) is a dataset for anomaly detection. OCT (Kermany et al. 2018) has three classes with a small number of samples, and these three classes are treated as abnormal classes. MNIST (LeCun and Cortes 1998) and CIFAR-10 (Krizhevsky, Hinton et al. 2009) are datasets for multiclass classification.
Dataset Splits | No | The paper mentions training and testing on datasets like MNIST and CIFAR-10, which have standard splits, but it does not explicitly specify the training/validation/test percentages or sample counts needed for reproduction.
Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU models, CPU types, memory) used for running the experiments.
Software Dependencies | No | The paper does not provide specific version numbers for software dependencies or libraries used in the implementation (e.g., Python 3.8, PyTorch 1.9).
Experiment Setup | Yes | All the experiments are conducted using I = 3 generators and J = 3 discriminators. The relative weight β is selected empirically. As β increases, the discriminative loss contributes a larger fraction of the anomaly score, and the detection performance improves. (A hedged scoring sketch follows the table below.)
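
To make the pseudocode row concrete, here is a minimal sketch of the training structure named in Algorithm 1: I encoder-decoder generators trained against J discriminators on normal data only. The all-pairs feedback scheme, the helpers train_ensemble, adv_loss, and rec_loss, and the optimizer handling are illustrative assumptions, not the authors' exact procedure; the paper's released code is the authoritative reference.

```python
import torch

def train_ensemble(generators, discriminators, loader, g_opts, d_opts,
                   adv_loss, rec_loss, n_epochs=1):
    """Sketch of ensemble training (assumed all-pairs generator/discriminator feedback).

    generators:     list of (encoder, decoder) module pairs, i = 1..I
    discriminators: list of discriminator modules, j = 1..J
    g_opts, d_opts: one optimizer per generator / per discriminator
    adv_loss(out, real): hypothetical adversarial loss helper
    rec_loss(x_hat, x):  hypothetical reconstruction loss helper
    """
    for _ in range(n_epochs):
        for x in loader:  # batches of normal training samples only
            # Update each generator using feedback from every discriminator.
            for i, (enc, dec) in enumerate(generators):
                x_hat = dec(enc(x))                     # reconstruct the input
                g_loss = rec_loss(x_hat, x)             # reconstruction term
                for D in discriminators:
                    g_loss = g_loss + adv_loss(D(x_hat), real=True)  # try to fool each D
                g_opts[i].zero_grad()
                g_loss.backward()
                g_opts[i].step()
            # Update each discriminator on real data and on every generator's output.
            for j, D in enumerate(discriminators):
                d_loss = 0.0
                for enc, dec in generators:
                    with torch.no_grad():
                        x_hat = dec(enc(x))             # fake samples, detached from G
                    d_loss = d_loss + adv_loss(D(x), real=True) \
                                    + adv_loss(D(x_hat), real=False)
                d_opts[j].zero_grad()
                d_loss.backward()
                d_opts[j].step()
    return generators, discriminators
```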
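The experiment-setup row says the discriminative loss enters the anomaly score with relative weight β. The sketch below illustrates one plausible reading of that statement: a β-weighted sum of a reconstruction error and a discriminator feature-matching error, averaged over all generator-discriminator pairs. The function name anomaly_score, the default beta, the L1 error forms, and treating D as a feature extractor are assumptions for illustration; the exact score follows the base detection model used in the paper.

```python
import torch

def anomaly_score(x, generators, discriminators, beta=0.1):
    """Score a batch x; higher values indicate more anomalous inputs (assumed score form)."""
    pair_scores = []
    with torch.no_grad():
        for enc, dec in generators:
            x_hat = dec(enc(x))
            # Per-sample L1 reconstruction error.
            rec = (x - x_hat).abs().reshape(len(x), -1).mean(dim=1)
            for D in discriminators:
                # Compare discriminator outputs on the input and its reconstruction.
                f_real, f_fake = D(x), D(x_hat)
                disc = (f_real - f_fake).abs().reshape(len(x), -1).mean(dim=1)
                # Beta trades off the reconstruction and discriminative terms.
                pair_scores.append((1.0 - beta) * rec + beta * disc)
    # Average the score over all I x J generator-discriminator pairs.
    return torch.stack(pair_scores, dim=0).mean(dim=0)
```

In this reading, raising beta makes the discriminative term contribute a larger fraction of the final score, matching the paper's observation quoted above.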