Robust Counterfactual Explanations on Graph Neural Networks

Authors: Mohit Bajaj, Lingyang Chu, Zi Yu Xue, Jian Pei, Lanjun Wang, Peter Cho-Ho Lam, Yong Zhang

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
--- | --- | ---
Research Type | Experimental | "Exhaustive experiments on many public datasets demonstrate the superior performance of our method. Last, we conduct comprehensive experimental study to compare our method with the state-of-the-art methods on fidelity, robustness, accuracy and efficiency. All the results solidly demonstrate the superior performance of our approach."
Researcher Affiliation | Collaboration | 1 Huawei Technologies Canada Co., Ltd.; 2 McMaster University; 3 The University of British Columbia; 4 Simon Fraser University
Pseudocode | No | No structured pseudocode or algorithm blocks are provided in the paper.
Open Source Code | Yes | "The code is publicly available." Code available at https://marketplace.huaweicloud.com/markets/aihub/notebook/detail/?id=e41f63d3-e346-4891-bf6a-40e64b4a3278
Open Datasets | Yes | "For the graph classification task, we use one synthetic dataset, BA-2motifs [25], and two real-world datasets, Mutagenicity [21] and NCI1 [39]." (See the dataset-loading sketch below the table.)
Dataset Splits | No | The paper states, "Please refer to Appendix E for details on datasets, baselines and the experiment setups," indicating that specific dataset split information is not given in the main text.
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used to run the experiments are provided in the paper.
Software Dependencies | No | No specific software versions or library dependencies are provided for the ancillary software used in the experiments.
Experiment Setup | No | The paper notes that hyperparameters and experiment setups are discussed in Appendices E and G, implying these details are not present in the main body of the paper.
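As a quick check on the Open Datasets row, the sketch below loads the three public benchmarks named in the paper through PyTorch Geometric. This is an illustrative assumption, not the authors' released code (which lives at the Huawei AI Hub link above): it assumes Mutagenicity and NCI1 are obtained via the TUDataset collection and that the installed PyTorch Geometric version ships BA2MotifDataset for the synthetic BA-2motifs graphs.

```python
# Minimal sketch (assumption): fetching the public datasets named in the paper
# via PyTorch Geometric. Not the authors' released code; it only illustrates
# that the benchmarks themselves are publicly obtainable.
from torch_geometric.datasets import TUDataset, BA2MotifDataset

# Mutagenicity and NCI1 are distributed through the TUDataset collection.
mutagenicity = TUDataset(root="data/Mutagenicity", name="Mutagenicity")
nci1 = TUDataset(root="data/NCI1", name="NCI1")

# BA-2motifs (synthetic) is bundled with recent PyTorch Geometric releases.
ba2motifs = BA2MotifDataset(root="data/BA2Motif")

# Print a brief summary of each dataset: number of graphs and classes.
for ds in (mutagenicity, nci1, ba2motifs):
    label = ds.name if hasattr(ds, "name") else type(ds).__name__
    print(label, len(ds), "graphs,", ds.num_classes, "classes")
```

Note that this sketch says nothing about the train/validation/test splits used in the paper; as recorded in the Dataset Splits row, those are deferred to Appendix E.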