Certifying Robust Graph Classification under Orthogonal Gromov-Wasserstein Threats

Authors: Hongwei Jin, Zishun Yu, Xinhua Zhang

NeurIPS 2022

Reproducibility Variable Result LLM Response
Research Type Experimental Our experiments in Section 5 verify the effectiveness of our attacker and certificate, in that a large proportion of the graphs can be proved either vulnerable or robust.
Researcher Affiliation Academia Hongwei Jin, Mathematics and Computer Science Division, Argonne National Laboratory, Lemont, IL 60439, jinh@anl.gov; Zishun Yu and Xinhua Zhang, Department of Computer Science, University of Illinois Chicago, Chicago, IL 60607, {zyu32,zhangx}@uic.edu
Pseudocode Yes Algorithm 1: Attacking with binary search; Algorithm 2: Compute Xλ for a given λ
Open Source Code Yes The code is available at [55]. [55] Online supplementary material including code: https://github.com/cshjin/cert_ogw
Open Datasets Yes We experimented on four graph datasets whose statistics are given in Table 1 [56]. [56] TUDataset: https://chrsmrrs.github.io/datasets/docs/home/
Dataset Splits Yes "We split the dataset into 75% / 25% for training / testing, and tuned the γ and regularizer weight in SVM by 5-fold cross validation on the training set." "Following [7], we split each dataset into 30%, 20%, and 50% for training, validation, and testing, respectively."
Hardware Specification No The checklist (item 3d) claims that compute and resource type were reported, but no specific hardware details (e.g., GPU/CPU models, memory, or cloud instance types) are given in the main text of the paper.
Software Dependencies No The paper mentions software and tools only in general terms (e.g., SVM, GCN, and implicitly Python via the GitHub repository) and provides no version numbers for any library, framework, or solver used in the experiments.
Experiment Setup Yes A GCN model was then learned using a single linear convolutional layer with 64 hidden nodes, followed by average pooling. ...the GCN is trained with a hinge loss that promotes a large margin from (1) for robustness: Σ_{c≠y} max{0, 1 + max_A [G_c(A) − G_y(A)]}, where A is optimized under the budgets δ_l = 1 and δ_g = 10.
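The robust hinge objective quoted in the Experiment Setup row can be sketched as follows. This is a minimal illustration, not the paper's implementation: `worst_case_margins[c]` is assumed to already hold the inner maximization max_A [G_c(A) − G_y(A)] over budget-constrained perturbations (which the paper computes separately); the function only combines the per-class margins into the hinge loss.

```python
def robust_hinge_loss(worst_case_margins, y):
    """Sum over c != y of max(0, 1 + max_A [G_c(A) - G_y(A)]).

    worst_case_margins: list of floats, one per class c, assumed to
        equal max_A [G_c(A) - G_y(A)] for the true label y (the inner
        maximization is solved elsewhere, under the edit budgets).
    y: index of the true class.
    """
    loss = 0.0
    for c, margin in enumerate(worst_case_margins):
        if c == y:
            continue  # the true class contributes no hinge term
        loss += max(0.0, 1.0 + margin)
    return loss
```

A negative margin below −1 for every wrong class drives the loss to zero, which is exactly the "large margin" behavior the training objective promotes.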
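The Pseudocode row lists "Algorithm 1: Attacking with binary search"; the algorithm itself is not reproduced in this summary. The snippet below only illustrates the generic binary-search pattern such an attack typically relies on, with `is_feasible` as a hypothetical monotone oracle (e.g., "does an attack succeed within radius x?"); it is not the paper's Algorithm 1.

```python
def binary_search(is_feasible, lo, hi, tol=1e-6):
    """Return (approximately) the smallest x in [lo, hi] with
    is_feasible(x) True, assuming feasibility is monotone in x.

    This is the standard bisection pattern: keep the invariant that
    hi is feasible and lo is not, and halve the interval until it
    is shorter than tol.
    """
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if is_feasible(mid):
            hi = mid  # mid works; the threshold is at or below mid
        else:
            lo = mid  # mid fails; the threshold is above mid
    return hi
```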