A Differential Geometric View and Explainability of GNN on Evolving Graphs

Authors: Yazheng Liu, Xi Zhang, Sihong Xie

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on node classification, link prediction, and graph classification tasks with evolving graphs demonstrate the better sparsity, faithfulness, and intuitiveness of the proposed method over the state-of-the-art methods.
Researcher Affiliation | Academia | Key Laboratory of Trustworthy Distributed Computing and Service (MoE), BUPT, Beijing, China; Department of Computer Science and Engineering, Lehigh University, Bethlehem, PA, USA. {liuyz,zhangx}@bupt.edu.cn, xiesihong1@gmail.com
Pseudocode | Yes | Algorithm 1: Compute C_{p,j} for a target node j.
Open Source Code | No | The paper contains no explicit statement about releasing source code and no direct link to a code repository for the described methodology.
Open Datasets | Yes | We study the node classification task on evolving graphs on the Yelp Chi, Yelp NYC, and Yelp Zip (Rayana & Akoglu, 2015), Pheme (Zubiaga et al., 2017), and Weibo (Ma et al., 2018) datasets, and the link prediction task on the BC-OTC, BC-Alpha, and UCI datasets. These datasets have timestamps, so the graph evolutions can be identified. The molecular dataset MUTAG (Debnath et al., 1991) is used for graph classification.
Dataset Splits | No | The paper mentions using a "training set" but does not specify explicit train/validation/test splits (e.g., percentages, absolute sample counts, or citations to predefined splits) needed to reproduce the experiments.
Hardware Specification | No | The paper does not describe the specific hardware (e.g., GPU/CPU models, memory, cloud instance types) used to run its experiments.
Software Dependencies | No | The paper mentions using the cvxpy library (Diamond & Boyd) but does not specify a version number for cvxpy or any other key software component used in the experiments.
Experiment Setup | Yes | We set the learning rate to 0.01, the dropout to 0.2, and the hidden size to 16 when we train the GNN model.
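The reported hyperparameters (learning rate 0.01, dropout 0.2, hidden size 16) can be illustrated with a minimal sketch of a single GCN layer forward pass. This is an assumption-laden toy example, not the authors' actual model or data: the graph, features, and weights below are random placeholders, and only the hyperparameter values come from the paper.

```python
import numpy as np

# Hyperparameters as reported in the paper; everything else is illustrative.
learning_rate = 0.01  # (would be passed to an optimizer in a real training loop)
dropout = 0.2
hidden_size = 16

rng = np.random.default_rng(0)
n_nodes, in_dim = 5, 8

# Random symmetric adjacency with self-loops (toy graph, not from the paper).
B = rng.random((n_nodes, n_nodes)) > 0.7
A = ((B | B.T) | np.eye(n_nodes, dtype=bool)).astype(float)

# Symmetric normalization D^{-1/2} A D^{-1/2}, the standard GCN propagation matrix.
deg = A.sum(axis=1)
A_hat = A / np.sqrt(np.outer(deg, deg))

# Random node features and layer weights mapping in_dim -> hidden_size.
X = rng.standard_normal((n_nodes, in_dim))
W = rng.standard_normal((in_dim, hidden_size)) * 0.1

# One GCN layer: propagate, transform, ReLU.
H = np.maximum(A_hat @ X @ W, 0.0)

# Inverted dropout at training time: zero 20% of activations, rescale the rest.
mask = rng.random(H.shape) >= dropout
H = H * mask / (1.0 - dropout)

print(H.shape)  # (5, 16)
```

The hidden size fixes the width of `H`, and inverted dropout keeps the expected activation scale unchanged between training and inference, which is why the surviving entries are divided by `1 - dropout`.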