GNNCert: Deterministic Certification of Graph Neural Networks against Adversarial Perturbations
Authors: Zaishuo Xia, Han Yang, Binghui Wang, Jinyuan Jia
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct comprehensive evaluations on 8 benchmark datasets; our results show that GNNCert outperforms three state-of-the-art methods. |
| Researcher Affiliation | Academia | Zaishuo Xia (Renmin University of China), Han Yang (Sichuan University), Binghui Wang (Illinois Institute of Technology), Jinyuan Jia (The Pennsylvania State University) |
| Pseudocode | No | The paper describes the GNNCert method in detail in Section 3, including steps for dividing graphs and building the ensemble classifier, but it does not present this information in a structured pseudocode or algorithm block. (A hedged sketch of the division-and-voting procedure appears after the table.) |
| Open Source Code | Yes | The code is available at https://github.com/XiaFire/GNNCERT. |
| Open Datasets | Yes | We use 8 benchmark datasets for graph classification in our evaluations: DBLP (Pan et al., 2013), DD (Dobson & Doig, 2003), ENZYMES (Hu et al., 2020), MUTAG (Debnath et al., 1991), NCI1 (Wale et al., 2008), PROTEINS (Borgwardt et al., 2005), REDDIT-B (Yanardag & Vishwanathan, 2015), COLLAB (Yanardag & Vishwanathan, 2015). Table 2 in the Appendix shows the statistics of those datasets. |
| Dataset Splits | Yes | For each dataset, we randomly sample two-thirds of the graphs as the training dataset to train a base graph classifier and use the remaining graphs as the testing dataset. (A sketch of this split appears after the table.) |
| Hardware Specification | No | The paper states 'Results presented in this paper were obtained using the Chameleon testbed supported by NSF.' but does not provide specific details on the CPU, GPU, or other hardware components used for the experiments. |
| Software Dependencies | No | The paper mentions using GIN (Xu et al., 2019) and the Adam optimizer, and refers to a 'publicly available implementation' for GIN, but it does not specify version numbers for any software libraries or dependencies (e.g., PyTorch, TensorFlow, specific GIN library versions). |
| Experiment Setup | Yes | To train a graph classifier, we use the Adam optimizer with a learning rate of 0.001 and a batch size of 32 for 1,000 epochs. (A sketch of this training loop appears after the table.) |
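
To make the "Pseudocode" row concrete: the sketch below illustrates the divide-and-ensemble idea GNNCert describes in Section 3, assuming an edge-centric division in which each edge is assigned to a subgraph by hashing its endpoints. The names `divide_graph`, `ensemble_predict`, and `base_classifier`, and the use of MD5 as the hash, are our own illustrative choices, not taken from the paper or its released code.

```python
import hashlib
from collections import Counter

def divide_graph(edges, num_subgraphs):
    """Deterministically partition a graph's edge list into subgraphs by
    hashing each edge's (sorted) endpoint pair. Hypothetical sketch; the
    paper's actual hash function and division strategy may differ."""
    groups = [[] for _ in range(num_subgraphs)]
    for u, v in edges:
        key = f"{min(u, v)}-{max(u, v)}".encode()
        groups[int(hashlib.md5(key).hexdigest(), 16) % num_subgraphs].append((u, v))
    return groups

def ensemble_predict(base_classifier, edges, num_subgraphs):
    """Classify every subgraph with the base classifier and return the
    majority-vote label, mirroring the ensemble step described in the paper."""
    votes = Counter(base_classifier(sub) for sub in divide_graph(edges, num_subgraphs))
    return votes.most_common(1)[0][0]
```

Because the division is a deterministic function of each edge, perturbing a single edge changes the contents of at most one subgraph, which is what lets the majority vote be certified against a bounded number of perturbations.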
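
The protocol in the "Dataset Splits" row is straightforward; here is a minimal sketch, assuming a fixed `seed` for reproducibility (the paper does not state one):

```python
import random

def split_dataset(graphs, train_frac=2/3, seed=0):
    """Randomly sample two-thirds of the graphs for training and keep the
    rest for testing, as reported in the paper. `seed` is an assumption."""
    rng = random.Random(seed)
    indices = list(range(len(graphs)))
    rng.shuffle(indices)
    cut = int(len(graphs) * train_frac)
    return [graphs[i] for i in indices[:cut]], [graphs[i] for i in indices[cut:]]
```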
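
Finally, the reported experiment setup maps onto a standard PyTorch training loop. A minimal sketch, assuming `model` is the base GIN classifier and `train_loader` yields `(batch, labels)` pairs with a batch size of 32; both are placeholders, since the paper does not specify library versions:

```python
import torch

def train(model, train_loader, num_epochs=1000, lr=0.001):
    """Training loop matching the reported setup: Adam optimizer,
    learning rate 0.001, 1,000 epochs (batch size 32 via the loader)."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(num_epochs):
        for batch, labels in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(batch), labels)
            loss.backward()
            optimizer.step()
```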