Collective Certified Robustness against Graph Injection Attacks
Authors: Yuni Lai, Bailin Pan, Kaihuang Chen, Yancheng Yuan, Kai Zhou
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through comprehensive experiments, we demonstrate that our collective certification scheme significantly improves certification performance with minimal computational overhead. |
| Researcher Affiliation | Academia | 1Department of Computing, The Hong Kong Polytechnic University, Hong Kong, China; 2Department of Applied Mathematics, The Hong Kong Polytechnic University, Hong Kong, China. Correspondence to: Kai Zhou <kaizhou@polyu.edu.hk>, Yancheng Yuan <yancheng.yuan@polyu.edu.hk>, Yuni Lai <csylai@comp.polyu.edu.hk>. |
| Pseudocode | Yes | Algorithm 1 Graph model training (Lai et al., 2023). Algorithm 2 Monte Carlo sampling (Lai et al., 2023). Algorithm 3 Certified robustness via solving optimization problem (10) or (12). |
| Open Source Code | Yes | Our source code is available at https://github.com/Yuni-Lai/CollectiveLPCert. |
| Open Datasets | Yes | We follow the literature (Schuchardt et al., 2020; Lai et al., 2023) on certified robustness and evaluate our methods on two graph datasets: Cora-ML (Bojchevski & Günnemann, 2017) and Citeseer (Sen et al., 2008). |
| Dataset Splits | Yes | We use 50 nodes per class for training and validation respectively, while the remaining nodes are used for testing. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments, such as CPU or GPU models. |
| Software Dependencies | Yes | All our collective certification problems are solved using MOSEK (ApS, 2019) through the CVXPY (Diamond & Boyd, 2016) interface. |
| Experiment Setup | Yes | We employ two representative message-passing GNNs, Graph Convolution Network (GCN) (Kipf & Welling, 2016) and Graph Attention Network (GAT) (Veličković et al., 2017), with a hidden layer size of 64 as our base classifiers. We set the degree constraint per injected node as the average degree of existing nodes, which gives 6 (average degree 5.68) on Cora-ML and 4 (average degree 3.48) on Citeseer. Grid search is employed to find suitable smoothing parameters pe and pn in the range 0.5 to 0.9. We employ Monte Carlo to estimate the smoothed classifier with a sample size of N = 100,000. We set the confidence level as α = 0.01. |
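
The Monte Carlo estimation step described in the setup (Algorithm 2 of the paper) can be sketched as follows. This is a simplified illustration, not the authors' implementation: `base_classifier`, the binary input `x`, and the deletion-style smoothing are hypothetical stand-ins for the paper's node/edge smoothing distribution, while the Clopper-Pearson bound at confidence level α = 0.01 follows standard randomized-smoothing practice.

```python
import numpy as np
from scipy.stats import beta

def clopper_pearson_lower(k, n, alpha=0.01):
    """One-sided Clopper-Pearson lower confidence bound on a binomial proportion."""
    if k == 0:
        return 0.0
    return beta.ppf(alpha, k, n - k + 1)

rng = np.random.default_rng(0)

def base_classifier(noisy_input):
    # Hypothetical stand-in for a trained GNN's prediction on one sample.
    return int(noisy_input.sum() > 0)

def smoothed_prediction(x, n_samples=100_000, alpha=0.01, p=0.7):
    """Estimate the smoothed classifier's top class and a lower bound on its
    probability by sampling random deletions (smoothing parameter p)."""
    votes = np.zeros(2, dtype=int)
    for _ in range(n_samples):
        mask = rng.random(x.shape) > p      # each entry deleted w.p. p
        votes[base_classifier(x * mask)] += 1
    top = int(votes.argmax())
    p_lower = clopper_pearson_lower(votes[top], n_samples, alpha)
    return top, p_lower
```

With the paper's N = 100,000 samples, the Clopper-Pearson bound holds with probability at least 1 − α = 0.99 per estimated quantity.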
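
The collective certification step (Algorithm 3) solves an optimization problem over all test nodes at once; the paper does this with CVXPY and MOSEK. The toy LP relaxation below only illustrates the general shape of such a problem, using `scipy.optimize.linprog` for self-containedness: the variables `margin`, `cost`, and `budget`, and the per-node attack/flip formulation, are invented for illustration and do not reproduce the paper's problems (10) or (12).

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
m = 20                                    # number of target test nodes (toy)
margin = rng.uniform(0.1, 0.9, size=m)    # per-node certified margins (toy)
cost = rng.uniform(0.05, 0.3, size=m)     # margin damage per unit attack (toy)
budget = 2.0                              # attacker's total injection budget (toy)

# Variables z = [x_1..x_m, f_1..f_m]: x_t = attack effort on node t in [0, 1],
# f_t = LP-relaxed indicator that node t is flipped. Attacker maximizes sum(f).
c = np.concatenate([np.zeros(m), -np.ones(m)])   # linprog minimizes, so negate
A_ub = np.zeros((m + 1, 2 * m))
for t in range(m):
    A_ub[t, t] = -cost[t]        # margin[t]*f_t <= cost[t]*x_t
    A_ub[t, m + t] = margin[t]
A_ub[m, :m] = 1.0                # sum(x) <= budget
b_ub = np.concatenate([np.zeros(m), [budget]])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * (2 * m))
attacked = -res.fun              # upper bound on # of nodes the attacker flips
certified = m - attacked         # lower bound on # of collectively robust nodes
```

Because the adversary's budget is shared across all test nodes, this collective bound is tighter than certifying each node against the full budget independently, which is the intuition behind the paper's scheme.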