Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Uncertainty-Matching Graph Neural Networks to Defend Against Poisoning Attacks
Authors: Uday Shankar Shanthamallu, Jayaraman J. Thiagarajan, Andreas Spanias | pp. 9524–9532
AAAI 2021 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Using empirical studies with standard benchmarks and a suite of global and target attacks, we demonstrate the effectiveness of UM-GNN, when compared to existing baselines including the state-of-the-art robust GCN. |
| Researcher Affiliation | Collaboration | 1Arizona State University 2Lawrence Livermore National Labs |
| Pseudocode | Yes | Algorithm: We now present the algorithm to train a UM-GNN model given a poisoned graph Ĝ = (Â, X). |
| Open Source Code | No | The paper mentions that implementations for some baseline attacks were based on a publicly available library ('Deep Robust (Jin et al. 2020) library'), but it does not provide an explicit statement or link for the source code of their own proposed method (UM-GNN). |
| Open Datasets | Yes | We consider three benchmark citation networks extensively used in similar studies: Cora, Citeseer, and Pubmed (Sen et al. 2008). |
| Dataset Splits | Yes | We follow the typical transductive node classification setup (Kipf and Welling 2017; Veličković et al. 2018), while using the standard train, test, and validation splits for our experiments (see Table 1). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running the experiments. |
| Software Dependencies | Yes | We implemented all the baselines and the proposed approach using the PyTorch Deep Graph Library (version 0.5.1) (Wang et al. 2019). |
| Experiment Setup | Yes | For all baselines, we set the number of layers (2 layers) and other hyper-parameter settings as specified in their original papers. We set the number of hidden neurons to 16 for both GCN and GAT baselines. In addition, we set the number of attention heads to 8 for GAT. In our implementation of UM-GNN, the GNN model M was designed as a 2-layer GCN similar to the baseline, and the surrogate F was a 3-layer FCN with configuration 32–16–K, where K is the total number of classes. In all our experiments, we set λ_m = 0.3 and λ_s = 0.001. |
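The surrogate configuration reported in the Experiment Setup row (a 3-layer FCN with hidden sizes 32 and 16, output size K) can be sketched in PyTorch. This is an illustrative reconstruction, not the authors' code (which, per the Open Source Code row, was not released); the class name, feature dimension, and the placeholder loss weighting below are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class SurrogateFCN(nn.Module):
    """Illustrative surrogate F: a 3-layer FCN with configuration 32-16-K."""
    def __init__(self, in_dim: int, num_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 32),
            nn.ReLU(),
            nn.Linear(32, 16),
            nn.ReLU(),
            nn.Linear(16, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Reported hyper-parameters: lambda_m = 0.3, lambda_s = 0.001. These weight
# the uncertainty-matching and regularization terms of the UM-GNN objective;
# the exact loss definitions are given in the paper, not reproduced here.
lambda_m, lambda_s = 0.3, 0.001

# Example forward pass, assuming Cora-like inputs (1433 features, 7 classes).
x = torch.randn(4, 1433)
surrogate = SurrogateFCN(in_dim=1433, num_classes=7)
logits = surrogate(x)  # shape: (4, 7)
```

The 32–16–K shape comes directly from the quoted setup; everything around it (activation choice, batch size) is a plausible default rather than a documented detail.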