Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Certifying Robust Graph Classification under Orthogonal Gromov-Wasserstein Threats
Authors: Hongwei Jin, Zishun Yu, Xinhua Zhang
NeurIPS 2022 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments in Section 5 verify the effectiveness of our attacker and certificate, in that a large proportion of the graphs can be proved either vulnerable or robust. |
| Researcher Affiliation | Academia | Hongwei Jin Mathematics and Computer Science Division Argonne National Laboratory Lemont, IL 60439 EMAIL Zishun Yu, Xinhua Zhang Department of Computer Science University of Illinois Chicago Chicago, IL 60607 EMAIL |
| Pseudocode | Yes | Algorithm 1: Attacking with binary search; Algorithm 2: Compute Xλ for a given λ |
| Open Source Code | Yes | The code is available at [55]. [55] Online Supplementary. Supplementary material including code. https://github.com/cshjin/cert_ogw. |
| Open Datasets | Yes | Datasets. We experimented on four graph datasets whose statistics are given in Table 1 [56]. [56] TUDataset. Tudataset. https://chrsmrrs.github.io/datasets/docs/home/. |
| Dataset Splits | Yes | We split the dataset into 75% / 25% for training / testing, and tuned the γ and regularizer weight in SVM by 5-fold cross validation on the training set. Following [7], we split each dataset into 30%, 20%, and 50% for training, validation, and testing, respectively. |
| Hardware Specification | No | The paper states that it included compute and resource type (checklist 3d), but no specific hardware details (e.g., GPU/CPU models, memory, cloud instance types) are provided within the main text of the paper. |
| Software Dependencies | No | The paper mentions general software or tools (e.g., SVM, GCN, Python implicitly via the GitHub link) but does not provide specific version numbers for any libraries, frameworks, or solvers used in the experiments. |
| Experiment Setup | Yes | A GCN model was then learned using a single linear convolutional layer with 64 hidden nodes, followed by average pooling. ...the GCN is trained with a hinge loss that promotes large margin from (1) for robustness: $\sum_{c \neq y} \max\{0,\ 1 + \max_A \{G_c(A) - G_y(A)\}\}$, where $A$ is optimized under the budget of $\delta_l = 1$ and $\delta_g = 10$. We split the dataset into 75% / 25% for training / testing, and tuned the γ and regularizer weight in SVM by 5-fold cross validation on the training set. |
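The hinge loss quoted in the Experiment Setup row sums, over all classes other than the true label, a margin term between the class logit and the true-class logit. A minimal sketch of that outer sum is below; it assumes precomputed per-class logits $G_c(A)$ as a NumPy array and elides the inner adversarial maximization over $A$ under the budget $(\delta_l, \delta_g)$, which in the paper is a separate optimization. The function name `margin_hinge_loss` is illustrative, not from the paper's code.

```python
import numpy as np

def margin_hinge_loss(logits: np.ndarray, y: int) -> float:
    """Sketch of the multi-class hinge loss sum_{c != y} max(0, 1 + G_c - G_y).

    `logits` holds the per-class scores G_c(A) for a single graph (here taken
    as given; the paper maximizes them over perturbations A under a budget).
    """
    # 1 + G_c(A) - G_y(A) for every class c
    margins = 1.0 + logits - logits[y]
    margins[y] = 0.0  # the true class does not contribute to the sum
    return float(np.maximum(0.0, margins).sum())

# Example: 3 classes, true label y = 0; only class 1 violates the margin.
loss = margin_hinge_loss(np.array([0.5, 1.2, -1.0]), y=0)  # ~= 1.7
```

A loss of zero means every wrong class is separated from the true class by a margin of at least 1, which is the "large margin" condition the training objective promotes.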