Evaluating Attribution for Graph Neural Networks

Authors: Benjamin Sanchez-Lengeling, Jennifer Wei, Brian Lee, Emily Reif, Peter Wang, Wesley Qian, Kevin McCloskey, Lucy Colwell, Alexander Wiltschko

NeurIPS 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Graph-valued data offer an opportunity to quantitatively benchmark attribution methods, because challenging synthetic graph problems have computable ground-truth attributions. In this work we adapt commonly-used attribution methods for GNNs and quantitatively evaluate them using the axes of attribution accuracy, stability, faithfulness and consistency. We make concrete recommendations for which attribution methods to use, and provide the data and code for our benchmarking suite."
Researcher Affiliation | Collaboration | ¹ Google Research; ² Stanford University (work done while a resident at X); ³ University of Illinois at Urbana-Champaign; ⁴ University of Cambridge; ⁵ Email: {bmsanchez, alexbw}@google.com
Pseudocode | No | The paper describes attribution methods using mathematical formulas but does not include any pseudocode or algorithm blocks.
Open Source Code | Yes | "Code and data for this paper will be available at github.com/google-research/graph-attribution"
Open Datasets | Yes | "We use the dataset constructed by McCloskey et al. [29]."
Dataset Splits | No | "1,200 graphs are selected for each logic combination, and 10% of these graphs are reserved for the test set." (A sketch of such a split appears after this table.)
Hardware Specification | No | The paper does not explicitly describe the hardware used for running its experiments.
Software Dependencies | No | The paper mentions software like TensorFlow and RDKit but does not provide specific version numbers for these or other software dependencies.
Experiment Setup | Yes | "The variance of the noise (σ=0.15), and number of samples (n=100) is optimized for attribution AUROC on the Benzene task." (See the SmoothGrad and AUROC sketches after this table.)
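
The Dataset Splits quote is concrete enough to reconstruct. The sketch below is a hypothetical illustration of a 90/10 hold-out over the 1,200 graphs of one logic combination; the shuffling policy, the seed, and the split_task helper are assumptions, not the authors' code.

```python
# Illustrative reconstruction of the quoted split: 1,200 graphs per logic
# combination, with 10% held out as the test set. Shuffling policy, seed,
# and function name are assumptions; the paper does not state them.
import numpy as np

def split_task(graphs, test_fraction=0.1, seed=0):
    """Shuffle one task's graphs and hold out test_fraction of them."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(graphs))
    n_test = int(len(graphs) * test_fraction)  # 120 of 1,200 graphs
    test = [graphs[i] for i in idx[:n_test]]
    train = [graphs[i] for i in idx[n_test:]]
    return train, test
```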
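The Experiment Setup quote refers to a SmoothGrad-style procedure: attributions are gradients averaged over noisy copies of the input, with noise scale σ=0.15 and n=100 samples. Below is a minimal sketch under those parameters, assuming a hypothetical model_gradients callable that returns the gradient of the model output with respect to the node features; the authors' released code is TensorFlow-based, so this NumPy version is illustrative only.

```python
# Hedged sketch of the SmoothGrad-style setup in the quote: attributions are
# gradients averaged over n=100 noisy copies of the node features, with
# Gaussian noise of scale sigma=0.15 (the paper calls this the noise variance).
# model_gradients is a hypothetical callable, not part of the released code.
import numpy as np

def smoothgrad_attribution(node_features, model_gradients,
                           sigma=0.15, n_samples=100, seed=0):
    rng = np.random.default_rng(seed)
    total = np.zeros_like(node_features, dtype=float)
    for _ in range(n_samples):
        noisy = node_features + rng.normal(0.0, sigma, size=node_features.shape)
        total += model_gradients(noisy)
    return total / n_samples
```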
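Those parameters are tuned for attribution AUROC, i.e., how well per-node attribution scores rank the nodes in the computable ground-truth attributions mentioned in the abstract. A minimal sketch of such a metric, assuming binary node-level ground-truth masks and scikit-learn's roc_auc_score; the per-graph averaging convention is an assumption.

```python
# Minimal sketch of an attribution-AUROC metric: per-node attribution scores
# are ranked against binary ground-truth node masks, one AUROC per graph,
# then averaged across graphs. The averaging convention is an assumption.
import numpy as np
from sklearn.metrics import roc_auc_score

def attribution_auroc(ground_truth_masks, attributions):
    scores = []
    for mask, attr in zip(ground_truth_masks, attributions):
        mask = np.asarray(mask)
        if mask.min() == mask.max():
            continue  # AUROC is undefined when all nodes share one label
        scores.append(roc_auc_score(mask, np.asarray(attr)))
    return float(np.mean(scores))
```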