Rethinking Reconstruction-based Graph-Level Anomaly Detection: Limitations and a Simple Remedy
Authors: Sunwoo Kim, Soo Yong Lee, Fanchen Bu, Shinhwan Kang, Kyungho Kim, Jaemin Yoo, Kijung Shin
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments (Section 6): Our experiments on 10 datasets demonstrate the superiority of MUSE over prior GLAD methods. |
| Researcher Affiliation | Academia | Kim Jaechul Graduate School of AI, School of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST) {kswoo97, syleetolow, boqvezen97, shinhwan.kang, kkyungho, jaemin, kijungs}@kaist.ac.kr |
| Pseudocode | No | The paper describes the proposed method in text but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | Yes | Our code and datasets are available at https://github.com/kswoo97/GLAD_MUSE. |
| Open Datasets | Yes | Following existing GLAD studies [39, 32, 33, 55, 34, 38], we use graph classification benchmark datasets for evaluation. Specifically, we use 10 datasets from diverse domains, such as chemical molecules, bioinformatics, and social networks. Detailed descriptions of the datasets are in Appendix B. Our code and datasets are available at https://github.com/kswoo97/GLAD_MUSE. |
| Dataset Splits | Yes | For each configuration, the normal graphs are split into training, validation, and test sets in an 80%/10%/10% ratio. Additionally, 5% of anomalies are sampled for the validation set (only for hyperparameter tuning) and 5% for the test set. (This split protocol is sketched in code after the table.) |
| Hardware Specification | Yes | All experiments of this work are performed on a machine with NVIDIA RTX 8000 D6 GPUs (48GB VRAM) and two Intel Xeon Silver 4214R processors. |
| Software Dependencies | No | The paper mentions software components like 'Adam optimizer [21]' and 'GIN [49]' but does not provide specific version numbers for programming languages or libraries used. |
| Experiment Setup | Yes | For all the methods, we fix the dropout probability and weight decay as 0.3 and 1e-6, respectively. In addition, all the methods are trained with the Adam optimizer [21]. ... Training learning rate γ ∈ {10⁻³, 10⁻⁴}; model hidden dimension d ∈ {16, 32, 64, 128, 256}; number of GNN layers K ∈ {3, 4, 5}; number of model training epochs L ∈ {30, 60, 300}; MLP hidden dimension d ∈ {32, 64, 128}; MLP learning rate γ ∈ {10⁻², 10⁻³, 10⁻⁴}. We fix the MLP autoencoder training epochs and the number of MLP layers as 500 and 3, respectively. (The resulting search grid is sketched in code after the table.) |
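
The Dataset Splits row describes a concrete protocol: normal graphs are divided 80%/10%/10% into train/validation/test, and 5% of the anomalies are added to the validation set with another 5% added to the test set. The sketch below is a minimal, hypothetical illustration of that protocol (function and variable names are ours, not from the paper or its repository):

```python
import random

def split_glad_dataset(normal_graphs, anomalous_graphs, seed=0):
    """Illustrative split: normals 80/10/10 into train/val/test,
    plus 5% of anomalies into val and a disjoint 5% into test."""
    rng = random.Random(seed)
    normals = list(normal_graphs)
    anomalies = list(anomalous_graphs)
    rng.shuffle(normals)
    rng.shuffle(anomalies)

    n = len(normals)
    train = normals[: int(0.8 * n)]
    val = normals[int(0.8 * n): int(0.9 * n)]
    test = normals[int(0.9 * n):]

    m = int(0.05 * len(anomalies))
    val += anomalies[:m]          # anomalies used only for hyperparameter tuning
    test += anomalies[m: 2 * m]   # anomalies held out for evaluation

    return train, val, test
```

Note that the training set contains only normal graphs, which matches the reconstruction-based GLAD setting the paper targets.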
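
The Experiment Setup row quotes a hyperparameter search space with two fixed values (dropout 0.3, weight decay 1e-6). The following sketch, with names chosen by us for illustration, shows the grid that search space implies:

```python
from itertools import product

# Values mirror the search space quoted above; keys are illustrative names.
search_space = {
    "learning_rate": [1e-3, 1e-4],
    "hidden_dim": [16, 32, 64, 128, 256],
    "num_gnn_layers": [3, 4, 5],
    "num_epochs": [30, 60, 300],
}
fixed = {"dropout": 0.3, "weight_decay": 1e-6, "optimizer": "Adam"}

def iter_configs():
    """Yield every hyperparameter combination in the grid."""
    keys = list(search_space)
    for values in product(*(search_space[k] for k in keys)):
        yield {**fixed, **dict(zip(keys, values))}

print(sum(1 for _ in iter_configs()))  # 2 * 5 * 3 * 3 = 90 configurations
```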