LUNAR: Unifying Local Outlier Detection Methods via Graph Neural Networks

Authors: Adam Goodge, Bryan Hooi, See-Kiong Ng, Wee Siong Ng

AAAI 2022, pp. 6737-6745 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments: We now conduct experiments with real datasets to answer the following research questions: RQ1 (Accuracy): Does LUNAR outperform existing baselines in detecting true anomalies? RQ2 (Robustness): Is LUNAR more robust to changes in the neighbourhood size, k, than existing local outlier methods? RQ3 (Ablation Study): How do variations in our methodology affect its performance? ... Table 3 shows the AUC score (multiplied by 100) of each method for each dataset. (A hedged sketch of this metric appears after the table.)
Researcher Affiliation | Academia | Adam Goodge (1,3), Bryan Hooi (1,2), See-Kiong Ng (1,2), Wee Siong Ng (3); 1 School of Computing, National University of Singapore; 2 Institute of Data Science, National University of Singapore; 3 Institute for Infocomm Research, A*STAR, Singapore
Pseudocode | No | The paper describes algorithmic steps and functions but does not present them within a structured pseudocode or algorithm block.
Open Source Code | Yes | Code available at https://github.com/agoodge/LUNAR
Open Datasets | Yes | Datasets: Each dataset used in our experiments is publicly available and consists of a normal (0) class and anomaly (1) class. Table 2 summarises them and their key statistics.
Dataset Splits | Yes | The remaining normal samples are split 85:15 into a training set and a validation set. (See the hedged split sketch after the table.)
Hardware Specification | Yes | It was implemented using PyTorch Geometric on Windows OS and a Nvidia GeForce RTX 2080 Ti GPU.
Software Dependencies | No | The paper mentions 'PyTorch Geometric' and 'Adam' but does not specify their version numbers.
Experiment Setup | Yes | The neural network, F in (14), consists of four fully connected hidden layers all of size 256. All layers used tanh activation except for the sigmoid function at the output layer. We used mean squared error as the loss function and Adam (Kingma and Ba 2014) for optimization with a learning rate of 0.001 and weight decay of 0.1. We trained the model for 200 epochs and used the model parameters with the best validation score as the final model. (A hedged PyTorch sketch of this setup follows the table.)
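
The following is a minimal sketch of the 85:15 split of the remaining normal samples quoted in the Dataset Splits row. The array names, the placeholder data, and the use of scikit-learn's train_test_split are assumptions for illustration, not the authors' code.

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
normal_X = rng.normal(size=(1000, 16))  # placeholder normal samples (assumed shape)

# 85% of the remaining normal samples go to training, 15% to validation,
# mirroring the 85:15 split quoted above.
train_X, val_X = train_test_split(normal_X, test_size=0.15, random_state=0)
print(train_X.shape, val_X.shape)  # (850, 16) (150, 16)
```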
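Next, a hedged PyTorch sketch of the quoted experiment setup: a network F with four fully connected hidden layers of size 256, tanh activations, a sigmoid output, mean squared error loss, and Adam with learning rate 0.001 and weight decay 0.1, trained for 200 epochs while keeping the parameters with the best validation score. The input dimension k, the placeholder tensors, and the plain training loop are assumptions; LUNAR itself builds a k-nearest-neighbour graph and uses PyTorch Geometric message passing, which is omitted here.

```python
import copy
import torch
import torch.nn as nn

k = 100  # assumed neighbourhood size (input dimension of F)

# Four fully connected hidden layers of size 256, tanh activations,
# and a sigmoid at the output layer, as quoted above.
F = nn.Sequential(
    nn.Linear(k, 256), nn.Tanh(),
    nn.Linear(256, 256), nn.Tanh(),
    nn.Linear(256, 256), nn.Tanh(),
    nn.Linear(256, 256), nn.Tanh(),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(F.parameters(), lr=1e-3, weight_decay=0.1)

# Placeholder inputs and targets stand in for the real LUNAR graph features;
# they are illustrative only.
train_x, train_y = torch.rand(850, k), torch.zeros(850, 1)
val_x, val_y = torch.rand(150, k), torch.zeros(150, 1)

best_val, best_state = float("inf"), None
for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(F(train_x), train_y)
    loss.backward()
    optimizer.step()

    with torch.no_grad():
        val_loss = loss_fn(F(val_x), val_y).item()
    # Keep the parameters with the best validation score as the final model.
    if val_loss < best_val:
        best_val, best_state = val_loss, copy.deepcopy(F.state_dict())

F.load_state_dict(best_state)
```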
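Finally, a short sketch of the reported metric from the Research Type row: the ROC AUC score multiplied by 100. The use of scikit-learn's roc_auc_score and the toy labels and scores are assumptions, not the paper's evaluation code.

```python
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 0, 1]             # 0 = normal, 1 = anomaly
y_score = [0.1, 0.2, 0.9, 0.3, 0.7]  # anomaly scores from the detector
print(100 * roc_auc_score(y_true, y_score))  # AUC x 100, e.g. 100.0 here
```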