ProtGNN: Towards Self-Explaining Graph Neural Networks

Authors: Zaixi Zhang, Qi Liu, Hao Wang, Chengqiang Lu, Chee-Kong Lee

AAAI 2022, pp. 9127-9135

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Finally, we evaluate our method on a wide range of datasets and perform concrete case studies. Extensive results show that ProtGNN and ProtGNN+ can provide inherent interpretability while achieving accuracy on par with the non-interpretable counterparts."
Researcher Affiliation | Collaboration | "1 Anhui Province Key Lab. of Big Data Analysis and Application, School of Computer Science and Technology, University of Science and Technology of China; 2 Tencent America"
Pseudocode | Yes | "Algorithm 1: Overview of ProtGNN/ProtGNN+ Training" (a hedged sketch of this training schedule appears after the table)
Open Source Code | Yes | "The implementation is publicly available at https://github.com/zaixizhang/ProtGNN."
Open Datasets | Yes | "MUTAG (Debnath et al. 1991) and BBBP (Wu et al. 2018) are molecule datasets for graph classification... Graph-SST2 (Socher et al. 2013) and Graph-Twitter (Dong et al. 2014) are sentiment graph datasets... BA-Shape is a synthetic node classification dataset."
Dataset Splits | Yes | "The split for train/validation/test sets is 80% : 10% : 10%." (an illustrative split snippet follows the table)
Hardware Specification | Yes | "All our experiments are conducted with one Tesla V100 GPU."
Software Dependencies | No | The paper mentions the ADAM optimizer, GCN, GAT, GIN, and BERT word embeddings, but does not provide version numbers for any software libraries or frameworks (e.g., PyTorch, TensorFlow, scikit-learn).
Experiment Setup | Yes | "All models are trained for 500 epochs with an early stopping strategy based on accuracy on the validation set. We adopt the ADAM optimizer with a learning rate of 0.005. In Eq. (3), the hyperparameters λ_1, λ_2, and λ_3 are set to 0.10, 0.05, and 0.01, respectively. s_max is set to 0.3 in Eq. (6). The number of prototypes per class m is set to 5. In MCTS for prototype projection, we set λ in Eq. (9) to 5 and the number of iterations to 20. Each node in the Monte Carlo tree can expand up to 10 child nodes and N_min is set to 5. The prototype projection period τ is set to 50 and the projection epoch T_p is set to 100. In the training of ProtGNN+, the warm-up epoch T_w is set to 200. We employ a three-layer neural network to learn edge weights. In Eq. (14), λ_b is set to 0.01 and B is set to 10." (the sketch below wires these values into the Algorithm 1 schedule)
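
Algorithm 1 interleaves standard gradient training with periodic prototype projection, and the hyperparameters in the Experiment Setup row pin down its schedule. Below is a minimal sketch of that schedule, assuming PyTorch-Geometric-style batches; `model`, `project_prototypes`, `evaluate`, the `enable_subgraph_sampling` hook, and the early-stopping patience are hypothetical stand-ins, not the authors' code (see https://github.com/zaixizhang/ProtGNN for the real implementation).

```python
import torch
import torch.nn.functional as F

# Hyperparameters as reported in the paper's experiment setup.
LR = 0.005                      # ADAM learning rate
MAX_EPOCHS = 500                # total epochs, with early stopping on val accuracy
LAMBDAS = (0.10, 0.05, 0.01)    # λ_1, λ_2, λ_3 weighting the regularizers in Eq. (3)
TAU = 50                        # prototype projection period τ
T_P = 100                       # projection epoch T_p (projection starts here)
T_W = 200                       # warm-up epoch T_w for ProtGNN+

def train(model, train_loader, val_loader, project_prototypes, evaluate):
    """Sketch of the Algorithm 1 training schedule (hypothetical wiring)."""
    optimizer = torch.optim.Adam(model.parameters(), lr=LR)
    best_val_acc, bad_epochs, patience = 0.0, 0, 20   # patience value is an assumption

    for epoch in range(MAX_EPOCHS):
        if epoch == T_W:
            # After warm-up, ProtGNN+ switches on the learned subgraph sampling
            # (hypothetical hook; the exact mechanism lives inside the model).
            model.enable_subgraph_sampling()

        model.train()
        for batch in train_loader:
            optimizer.zero_grad()
            logits, regs = model(batch)   # regs: the three Eq. (3) regularizer values
            loss = F.cross_entropy(logits, batch.y)
            loss = loss + sum(lam * r for lam, r in zip(LAMBDAS, regs))
            loss.backward()
            optimizer.step()

        # Every τ epochs from epoch T_p onward, project each prototype onto its
        # nearest training subgraph (found via MCTS in the paper).
        if epoch >= T_P and (epoch - T_P) % TAU == 0:
            project_prototypes(model, train_loader)

        val_acc = evaluate(model, val_loader)
        if val_acc > best_val_acc:
            best_val_acc, bad_epochs = val_acc, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                break   # early stopping on validation accuracy
```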
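
The 80:10:10 split from the Dataset Splits row can be reproduced with a plain random partition; the snippet below is an illustrative sketch. The seed and the unstratified shuffle are assumptions, since the paper states only the ratio.

```python
import random

def split_indices(n, seed=0):
    """Randomly partition n graph indices into 80% train / 10% val / 10% test."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)  # seed choice is an assumption
    n_train, n_val = int(0.8 * n), int(0.1 * n)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train_idx, val_idx, test_idx = split_indices(1000)
```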