Latent Regularized Generative Dual Adversarial Network For Abnormal Detection

Authors: Chengwei Chen, Jing Liu, Yuan Xie, Yinxiao Ban, Chunyun Wu, Yiqing Tao, Haichuan Song

IJCAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments show that our model has clear superiority over cutting-edge semi-supervised abnormal detectors and achieves state-of-the-art results on the datasets."
Researcher Affiliation | Academia | Chengwei Chen, Jing Liu, Yuan Xie, Yinxiao Ban, Chunyun Wu, Yiqing Tao and Haichuan Song, East China Normal University. {52184501028, 51174500035}@stu.ecnu.edu.cn, yxie@cs.ecnu.edu.cn, {51194501102, 51184501161, 10161900112}@stu.ecnu.edu.cn, hcsong@cs.ecnu.edu.cn
Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any statement or link for open-source code availability.
Open Datasets | Yes | "We evaluate the proposed method on the well-known COIL100, MNIST, CIFAR10 and fMNIST datasets in out-of-distribution sample detection experiments. In addition, the DCASE dataset is considered in the experiments; it is a publicly available acoustic novelty detection dataset. To evaluate the adversarial attack detection task, we use the GTSRB dataset [Stallkamp et al., 2011]."
Dataset Splits | Yes | "For all experiments, we train on the standard training set and test on the validation set."
Hardware Specification | Yes | "The experiments are carried out on a standard PC with an NVIDIA-1080 GPU and a multi-core 2.1 GHz CPU."
Software Dependencies | No | "We implement our approach in PyTorch by optimizing the weighted loss (defined in equation (1)) with the weight values w_i = 1, w_a = 5, w_z = 1, w_e = 0.05 and w_d = 1, which are empirically chosen to yield optimal results." The paper mentions PyTorch but does not specify a version number.
Experiment Setup | Yes | "We use adaptive moment estimation (Adam) as the optimizer and set the initial learning rate to 0.002. For all experiments, we train on the standard training set and test on the validation set. In addition, data augmentation (random cropping and horizontal flipping) and normalization (subtracting and dividing sequentially by the mean and standard deviation of the training images) are applied to all training images. The detailed structures of the autoencoder, the auxiliary autoencoder, the mutual information estimator and the discriminator are described in Figure 2. For the dual autoencoder loss, the parameter k is set to 0.4."