Dense Projection for Anomaly Detection
Authors: Dazhi Fu, Zhao Zhang, Jicong Fan
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experiments show that our method DPAD is effective not only in traditional one-class classification problems but also in scenarios with complex normal data composed of multiple classes. |
| Researcher Affiliation | Academia | (1) The Chinese University of Hong Kong, Shenzhen, China; (2) University of Electronic Science and Technology of China, Chengdu, China; (3) Hefei University of Technology, Hefei, China; (4) Shenzhen Research Institute of Big Data, Shenzhen, China |
| Pseudocode | Yes | Algorithm 1: Training and testing processes of DPAD |
| Open Source Code | No | The paper mentions running "officially released code" for other methods but does not provide a link or explicit statement about releasing its own source code for DPAD. |
| Open Datasets | Yes | We choose CIFAR-10 (Krizhevsky, Hinton et al. 2009) and Fashion-MNIST (Xiao, Rasul, and Vollgraf 2017) as our image datasets, Arrhythmia (Rayana 2016), Abalone (Dua, Graff et al. 2017), Campaign (Han et al. 2022), and MAGIC Gamma (Han et al. 2022) as our tabular datasets to test the proposed method DPAD. |
| Dataset Splits | No | The paper describes how classes are assigned as normal or anomalous for training and testing (e.g., "choosing one of the 10 classes as the normal class") and notes that "the testing samples remained the same as before", indicating a fixed test set. However, it does not give explicit percentages, sample counts, or predefined train/validation/test splits (e.g., 80/10/10). |
| Hardware Specification | No | The paper does not provide any specific details regarding the hardware used for the experiments, such as GPU models, CPU types, or memory specifications. |
| Software Dependencies | No | The paper mentions using a LeNet-based CNN and refers to various methods/models but does not list any specific software dependencies (e.g., Python, PyTorch, TensorFlow) with their version numbers. |
| Experiment Setup | Yes (see the sketch after this table) | In the training stage, to ensure that the distance between any two representations of training data is fully considered and optimized, we refrain from using mini-batches... Moreover, the hyperparameter γ controls the initialization of the weights W̃_ij... we set γ to a relatively small numerical value. The optimization details are presented in Algorithm 1. We run the proposed methods 5 times with 100 epochs of optimization to get the final average result. |
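
The Dataset Splits and Experiment Setup rows together imply a one-class evaluation protocol with full-batch training, repeated 5 times for 100 epochs and averaged. The sketch below is a minimal, hedged reconstruction of that protocol only: it assumes PyTorch and scikit-learn, expects flattened feature arrays, and substitutes a tiny autoencoder (`StandInAE`) as a placeholder anomaly scorer, since the paper's DPAD objective and its γ-controlled weight initialization are not reproduced here. All helper names (`build_one_class_split`, `run_once`, `averaged_auc`) are hypothetical.

```python
# Hedged sketch: one class is treated as "normal" for training, the full test set
# (all classes) is scored, and AUC is averaged over 5 runs of 100 full-batch epochs.
# The scorer below is a simple autoencoder stand-in, NOT the DPAD method.
import numpy as np
import torch
import torch.nn as nn
from sklearn.metrics import roc_auc_score


def build_one_class_split(train_x, train_y, test_x, test_y, normal_class):
    """Keep only the chosen normal class for training; mark test anomalies as 1."""
    x_train = train_x[train_y == normal_class]
    y_test = (test_y != normal_class).astype(int)  # 1 = anomaly, 0 = normal
    return x_train, test_x, y_test


class StandInAE(nn.Module):
    """Tiny autoencoder used only as a placeholder anomaly scorer (hypothetical)."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.dec = nn.Linear(hidden, dim)

    def forward(self, x):
        return self.dec(self.enc(x))


def run_once(x_train, x_test, y_test, epochs=100, lr=1e-3, seed=0):
    """Train full-batch (no mini-batching, as the paper states) and return test AUC."""
    torch.manual_seed(seed)
    model = StandInAE(x_train.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    x_tr = torch.as_tensor(x_train, dtype=torch.float32)
    for _ in range(epochs):                      # one full-batch update per epoch
        opt.zero_grad()
        loss = ((model(x_tr) - x_tr) ** 2).mean()
        loss.backward()
        opt.step()
    with torch.no_grad():
        x_te = torch.as_tensor(x_test, dtype=torch.float32)
        scores = ((model(x_te) - x_te) ** 2).mean(dim=1).numpy()  # higher = more anomalous
    return roc_auc_score(y_test, scores)


def averaged_auc(train_x, train_y, test_x, test_y, normal_class, n_runs=5):
    """Repeat training 5 times with different seeds and report mean/std AUC."""
    x_tr, x_te, y_te = build_one_class_split(train_x, train_y, test_x, test_y, normal_class)
    aucs = [run_once(x_tr, x_te, y_te, seed=s) for s in range(n_runs)]
    return float(np.mean(aucs)), float(np.std(aucs))
```

Usage would look like `averaged_auc(train_x, train_y, test_x, test_y, normal_class=0)`, with image data flattened to 2-D arrays beforehand; the split construction and the 5-run, 100-epoch full-batch loop are the only parts taken from the table above.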