Revisiting Robustness in Graph Machine Learning
Authors: Lukas Gosch, Daniel Sturm, Simon Geisler, Stephan Günnemann
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Using Contextual Stochastic Block Models (CSBMs) and real-world graphs, our results uncover: i) for a majority of nodes the prevalent perturbation models include a large fraction of perturbed graphs violating the unchanged semantics assumption; ii) surprisingly, all assessed GNNs show over-robustness, i.e., robustness beyond the point of semantic change. (A CSBM sampling sketch follows the table.) |
| Researcher Affiliation | Academia | Lukas Gosch, Daniel Sturm, Simon Geisler, Stephan Günnemann; Department of Computer Science & Munich Data Science Institute, Technical University of Munich; {l.gosch, da.sturm, s.geisler, s.guennemann}@tum.de |
| Pseudocode | No | Not found. The paper describes methods in text but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | The source code, together with all experiment configuration files, can be found on the project page: https://www.cs.cit.tum.de/daml/revisiting-robustness/ |
| Open Datasets | Yes | Using Contextual Stochastic Block Models (CSBMs) and real-world graphs... CORA (Sen et al., 2008)... Cora-ML (Bojchevski & Günnemann, 2018), Citeseer (Sen et al., 2008), Pubmed (Sen et al., 2008) and ogbn-arxiv (Hu et al., 2020) are selected. |
| Dataset Splits | Yes | We use an 80%/20% train/validation split on the nodes. ... for all datasets except ogbn-arxiv, 40 nodes per class are randomly selected as validation and test nodes. (A per-class split sketch follows the table.) |
| Hardware Specification | No | Not found. The paper does not specify any hardware details like GPU/CPU models, memory, or specific computing environments used for the experiments. |
| Software Dependencies | No | Not found. The paper mentions software like Adam optimizer but does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | We train all models for 3000 epochs with a patience of 300 epochs using Adam (Kingma & Ba, 2015) and explore learning rates [0.1, 0.01, 0.001] and weight decay [0.01, 0.001, 0.001]; additionally, for MLP: we use a 1-(hidden-)layer MLP and test hidden dimensions [32, 64, 128, 256] and dropout [0.0, 0.3, 0.5]. (A training-loop sketch follows the table.) |
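
The Research Type row references Contextual Stochastic Block Models (CSBMs). Below is a minimal, illustrative two-class CSBM sampler: class-dependent edge probabilities plus class-dependent Gaussian node features. The parameter names and default values (`p_intra`, `p_inter`, `mu`, `sigma`, `d`) are assumptions for this sketch, not the paper's exact parameterization.

```python
# Minimal two-class CSBM sampler (illustrative sketch, not the authors' code).
import numpy as np

def sample_csbm(n=1000, p_intra=0.05, p_inter=0.01, mu=1.0, sigma=1.0, d=16, seed=0):
    rng = np.random.default_rng(seed)
    y = rng.integers(0, 2, size=n)                      # binary class labels
    same = y[:, None] == y[None, :]                     # same-class indicator matrix
    probs = np.where(same, p_intra, p_inter)            # block edge probabilities
    upper = np.triu(rng.random((n, n)) < probs, k=1)    # sample upper triangle only
    adj = (upper | upper.T).astype(int)                 # symmetric adjacency, no self-loops
    u = rng.normal(size=d)
    u /= np.linalg.norm(u)                              # shared feature direction
    signs = np.where(y == 1, mu, -mu)[:, None]          # class-dependent mean +/- mu
    x = signs * u[None, :] / np.sqrt(d) + sigma * rng.normal(size=(n, d))
    return adj, x, y
```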
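
The Dataset Splits row reports that, for the real-world datasets except ogbn-arxiv, 40 nodes per class are randomly selected as validation and test nodes. The following is a hedged sketch of such a per-class split, assuming integer class labels in a NumPy array; `per_class_split` is a hypothetical helper, not code from the authors' repository.

```python
# Illustrative per-class validation/test selection (40 nodes per class by default).
import numpy as np

def per_class_split(y, n_per_class=40, seed=0):
    rng = np.random.default_rng(seed)
    val_idx, test_idx = [], []
    for c in np.unique(y):
        idx = rng.permutation(np.where(y == c)[0])
        val_idx.extend(idx[:n_per_class])                   # validation nodes for class c
        test_idx.extend(idx[n_per_class:2 * n_per_class])   # test nodes for class c
    held_out = np.concatenate([val_idx, test_idx])
    train_pool = np.setdiff1d(np.arange(len(y)), held_out)  # remaining nodes
    return np.array(val_idx), np.array(test_idx), train_pool
```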
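
The Experiment Setup row describes training with Adam for up to 3000 epochs with a patience of 300, over a grid of learning rates, weight decay values, and (for the MLP) hidden dimensions and dropout rates. The sketch below shows one plausible early-stopping loop for a single configuration; `train_one_config` and the `data` object (with `x`, `y`, `train_mask`, `val_mask`) are placeholders, and the forward pass is written MLP-style (features only), so this is not the authors' implementation.

```python
# Hedged sketch: early-stopping training loop matching the reported setup
# (Adam, up to 3000 epochs, patience of 300 on validation loss).
import torch
import torch.nn.functional as F

learning_rates = [0.1, 0.01, 0.001]
weight_decays  = [0.01, 0.001, 0.001]   # values as listed in the paper's text
hidden_dims    = [32, 64, 128, 256]     # MLP only
dropouts       = [0.0, 0.3, 0.5]        # MLP only

def train_one_config(model, data, lr, weight_decay, max_epochs=3000, patience=300):
    opt = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay)
    best_val, best_state, epochs_without_improvement = float("inf"), None, 0
    for _ in range(max_epochs):
        model.train()
        opt.zero_grad()
        loss = F.cross_entropy(model(data.x)[data.train_mask], data.y[data.train_mask])
        loss.backward()
        opt.step()

        model.eval()
        with torch.no_grad():
            val_loss = F.cross_entropy(model(data.x)[data.val_mask],
                                       data.y[data.val_mask]).item()
        if val_loss < best_val:
            best_val, best_state = val_loss, model.state_dict()
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:   # early stopping
                break
    return best_val, best_state
```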