DAG Matters! GFlowNets Enhanced Explainer for Graph Neural Networks

Authors: Wenqian Li, Yinchuan Li, Zhigang Li, Jianye HAO, Yan Pang

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct extensive experiments on both synthetic and real datasets, and both qualitative and quantitative results show the superiority of our GFlowExplainer.
Researcher Affiliation | Collaboration | National University of Singapore, Singapore; Huawei Noah's Ark Lab, Beijing, China; Tianjin University, Tianjin, China
Pseudocode | Yes | We show the pseudocode of our GFlowExplainer for node classification and graph classification in Algorithm 1 and Algorithm 2, respectively.
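Note: the algorithms themselves appear only in the paper. As a rough illustration of the GFlowNet-style construction they describe, the following is a minimal hypothetical sketch (not the authors' Algorithm 1; all names are invented here): a learned flow model scores which neighboring node to add next while growing a connected subgraph around a target node.

```python
import torch
import torch.nn as nn

class FlowModel(nn.Module):
    """Scores candidate actions: adding a frontier node to the current subgraph state."""
    def __init__(self, dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, state_emb, cand_emb):
        # state_emb: (dim,) summary of the current subgraph;
        # cand_emb: (num_candidates, dim) embeddings of frontier nodes.
        s = state_emb.expand(cand_emb.size(0), -1)
        return self.mlp(torch.cat([s, cand_emb], dim=-1)).squeeze(-1)

def sample_trajectory(flow, node_emb, adj, start, max_size=20):
    """Roll out one subgraph-construction trajectory from node `start`.

    `adj` maps a node id to its neighbor ids; the state stays connected
    because only frontier nodes (neighbors of the state) can be added.
    """
    state = [start]
    for _ in range(max_size - 1):
        frontier = sorted({j for i in state for j in adj[i] if j not in state})
        if not frontier:
            break
        logits = flow(node_emb[state].mean(dim=0), node_emb[frontier])
        choice = torch.multinomial(torch.softmax(logits, dim=0), 1).item()
        state.append(frontier[choice])
    return state
```

In a GFlowNet, the flow model would be trained with a flow-matching objective so that the inflow of each state matches its outflow plus terminal reward; that loss is omitted here for brevity.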
Open Source Code | No | The paper lists links for baseline methods (e.g., GNNExplainer, PGExplainer, DEGREE, RGExplainer) but does not provide a link or explicit statement about the availability of its own code.
Open Datasets | Yes | We use six datasets: four synthetic datasets (BA-shapes, BA-Community, Tree-Cycles, and Tree-Grid) are used for the node classification task and two datasets (BA-2motifs and Mutagenicity) are used for the graph classification task. Details of these datasets are described in Appendix E.3 and the visualizations are shown in Figure 9. The BA-shapes dataset consists of one Barabási–Albert graph (Barabási & Albert, 1999)... The Mutagenicity dataset is a real dataset...
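Note: Mutagenicity is distributed in the standard TU graph-classification collection, so it can be loaded with common toolkits; a minimal sketch using PyTorch Geometric (an assumption of convenience — the paper does not describe its data-loading code):

```python
from torch_geometric.datasets import TUDataset

# Downloads and caches the Mutagenicity graph-classification benchmark.
dataset = TUDataset(root="data/Mutagenicity", name="Mutagenicity")
print(len(dataset), "graphs,", dataset.num_classes, "classes")
```

The four synthetic datasets (BA-shapes, BA-Community, Tree-Cycles, Tree-Grid) and BA-2motifs are typically regenerated from the procedures in the GNNExplainer/PGExplainer literature rather than downloaded.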
Dataset Splits | No | Specifically, we vary the training set size over {10%, 30%, 50%, 70%, 90%} and take the remaining instances for testing. The paper does not explicitly provide details for a validation split.
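Note: as reported, the protocol reduces to a random train/test partition per ratio with no validation set; a minimal hypothetical sketch:

```python
import random

def train_test_split(n_instances, train_ratio, seed=0):
    """Random split with no validation set, mirroring the reported protocol."""
    idx = list(range(n_instances))
    random.Random(seed).shuffle(idx)
    cut = int(train_ratio * n_instances)
    return idx[:cut], idx[cut:]

# Sweep the training fractions reported in the paper.
for ratio in (0.1, 0.3, 0.5, 0.7, 0.9):
    train_idx, test_idx = train_test_split(1000, ratio)  # 1000 is a placeholder count
```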
Hardware Specification | Yes | All experiments were conducted on an NVIDIA Quadro RTX 6000 environment with PyTorch.
Software Dependencies | No | All experiments were conducted on an NVIDIA Quadro RTX 6000 environment with PyTorch. Table 4 lists 'PyTorch' without a version number.
Experiment Setup | Yes | The parameters of GFlowExplainer are shown in Table 4. Table 4 (Hyper-parameters): batch size 64; number of APPNP layers 3; α in APPNP 0.85; hidden dimension 64; architecture of the MLP in L 64-8-1; learning rate 1e-2; optimizer Adam; number of hops 3; maximum size of generated sequences 20; training epochs {50, 100} for node tasks and 100 for graph tasks; sample ratio of graph instances to train L 0.2.
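Note: Table 4 transcribes directly into a configuration object; a plain-Python sketch with hypothetical key names:

```python
# Hyper-parameters copied from Table 4 of the paper; key names are invented here.
CONFIG = {
    "batch_size": 64,
    "appnp_num_layers": 3,     # number of APPNP propagation layers
    "appnp_alpha": 0.85,       # α (teleport probability) in APPNP
    "hidden_dim": 64,
    "mlp_arch": (64, 8, 1),    # architecture of the MLP in L
    "learning_rate": 1e-2,
    "optimizer": "Adam",
    "num_hops": 3,
    "max_sequence_size": 20,   # maximum size of generated sequences
    "epochs_node": (50, 100),  # training epochs for node tasks
    "epochs_graph": 100,       # training epochs for graph tasks
    "sample_ratio": 0.2,       # ratio of graph instances used to train L
}
```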