Cross-Domain Graph Anomaly Detection via Anomaly-Aware Contrastive Alignment
Authors: Qizhou Wang, Guansong Pang, Mahsa Salehi, Wray Buntine, Christopher Leckie
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on eight CD-GAD settings demonstrate that our approach ACT achieves substantially improved detection performance over 10 state-of-the-art GAD methods. |
| Researcher Affiliation | Academia | Monash University; Singapore Management University; VinUniversity; The University of Melbourne |
| Pseudocode | No | The paper describes its method in text and equations but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at https://github.com/QZ-WANG/ACT. |
| Open Datasets | Yes | Eight CD-GAD settings based on four real-world GAD datasets, including Yelp Hotel (HTL), Yelp Res (RES), Yelp NYC (NYC) and Amazon (AMZ), are created... Statistics of each dataset are given in Suppl. Material. |
| Dataset Splits | No | The paper describes the datasets used (Yelp Hotel, Yelp Res, Yelp NYC, Amazon) and states 'Statistics of each dataset are given in Suppl. Material', but it does not provide specific train/validation/test split percentages or sample counts in the main text. |
| Hardware Specification | No | The paper mentions running experiments on the 'MASSIVE HPC facility', but does not provide specific details such as GPU or CPU models, or other hardware specifications. |
| Software Dependencies | No | The paper mentions the use of 'Graph SAGE' and 'ADAM optimiser' but does not specify version numbers for any software dependencies. |
| Experiment Setup | Yes | Our model ACT is implemented with a three-layer Graph SAGE ... 256 and 64 hidden dimensions are chosen for ψs and ψt respectively. The source model is trained for 50 epochs using a learning rate of 10⁻³. The domain alignment is performed for 50 epochs using a learning rate of 10⁻⁴. ... The optimisation is done in mini-batches of 128 target (centre) nodes using the ADAM optimiser... We use the sample size of 25 and 10 for the two hidden layers during message passing. In self labelling, α = 2.5 and q = 25 are used by default. |
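For readers attempting a re-implementation, the hyperparameters quoted in the Experiment Setup row can be collected into a single configuration. This is only an illustrative sketch: the key names (`hidden_dims`, `neighbor_samples`, etc.) are invented here and do not come from the authors' released code at https://github.com/QZ-WANG/ACT.

```python
# Hedged sketch of the reported ACT training setup.
# All dictionary keys are illustrative names, not identifiers from the paper's codebase;
# only the numeric values are taken from the quoted experiment description.
config = {
    "encoder": "GraphSAGE",              # three-layer encoder reported in the paper
    "hidden_dims": {"psi_s": 256, "psi_t": 64},  # source / target feature extractors
    "source_epochs": 50,                 # source model training
    "source_lr": 1e-3,                   # learning rate 10^-3
    "align_epochs": 50,                  # domain alignment stage
    "align_lr": 1e-4,                    # learning rate 10^-4
    "batch_size": 128,                   # target (centre) nodes per mini-batch
    "optimizer": "Adam",
    "neighbor_samples": [25, 10],        # samples per hidden layer during message passing
    "self_labelling": {"alpha": 2.5, "q": 25},
}

# A basic sanity check that the two training stages use distinct learning rates.
assert config["source_lr"] > config["align_lr"]
```

Such a config dict makes it easy to log the exact setup alongside results, which is the kind of detail the review flags as missing for hardware and software versions.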