Unsupervised Anomaly Detection by Robust Density Estimation
Authors: Boyang Liu, Pang-Ning Tan, Jiayu Zhou (pp. 4101–4108)
AAAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Experimental Evaluation: This section presents the empirical studies to validate the effectiveness of our proposed approach." |
| Researcher Affiliation | Academia | Boyang Liu, Pang-Ning Tan, Jiayu Zhou; Department of Computer Science and Engineering, Michigan State University; {liuboya2, ptan, jiayuz}@msu.edu |
| Pseudocode | Yes | Algorithm 1: Robust Gradient for Density Estimation. Input: corrupted gradient matrix G ∈ ℝ^(n×d), where n = \|A\| + \|N\|; anomaly ratio ε; current parameter estimate θ^(t). Output: estimated mean of the clean gradient, µ̂^(t) ∈ ℝ^d. 1. For each row g_i in G, calculate its predicted density p̂_θ(t)(x_i). 2. Choose the ε-fraction of rows in G with the smallest p̂_θ(t)(x_i). 3. Remove the selected rows from G. 4. Return the empirical mean of the remaining rows as µ̂^(t). |
| Open Source Code | No | The paper does not explicitly state that the source code for the described methodology is publicly available, nor does it provide a link to a code repository. |
| Open Datasets | Yes | Datasets: Experiments are performed on two benchmark datasets: (1) the Stony Brook ODDS library (Rayana 2016), which contains 16 benchmark outlier detection datasets; (2) CIFAR10, an image dataset with high-dimensional features. |
| Dataset Splits | No | The paper specifies training and testing splits (60% training, 40% testing for ODDS; 80% training, 20% testing for CIFAR10), but does not explicitly mention a separate validation split or how validation was performed if used internally. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments (e.g., GPU/CPU models, memory specifications). |
| Software Dependencies | No | The paper mentions using VGG19 and PCA, but does not specify any software names with version numbers for dependencies like deep learning frameworks (e.g., TensorFlow, PyTorch) or programming languages. |
| Experiment Setup | No | The paper states, 'Details of the hyperparameters used are given in the Appendix,' but the provided text does not include the appendix, therefore specific values are not present in the main content. |
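The filtering step in the extracted Algorithm 1 amounts to a density-trimmed mean of per-sample gradients. The sketch below is a hypothetical illustration of that step only (it is not the authors' code): `robust_gradient_mean` and its arguments are assumed names, and the density scores are taken as given, where in the paper they would come from the current model estimate p̂_θ(t).

```python
import numpy as np

def robust_gradient_mean(G, densities, eps):
    """Trimmed-mean sketch of Algorithm 1 (hypothetical helper):
    drop the eps-fraction of per-sample gradient rows with the lowest
    predicted density, then return the empirical mean of the rest."""
    n = G.shape[0]
    k = int(np.floor(eps * n))          # number of rows flagged as likely anomalies
    keep = np.argsort(densities)[k:]    # indices of the (1 - eps)-fraction densest rows
    return G[keep].mean(axis=0)         # estimated mean of the clean gradient

# Toy usage: five per-sample gradients in R^2; the last row is corrupted
# and is assigned a very low predicted density.
G = np.array([[1.0, 1.0],
              [1.1, 0.9],
              [0.9, 1.1],
              [1.0, 1.0],
              [50.0, -50.0]])
densities = np.array([0.9, 0.8, 0.85, 0.95, 0.01])
mu_hat = robust_gradient_mean(G, densities, eps=0.2)
print(mu_hat)  # mean over the four uncorrupted rows: [1. 1.]
```

In an actual training loop this estimate would replace the plain gradient mean at each update of θ^(t), so the corrupted rows cannot bias the density model toward the anomalies.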