Optimization-Induced Graph Implicit Nonlinear Diffusion

Authors: Qi Chen, Yifei Wang, Yisen Wang, Jiansheng Yang, Zhouchen Lin

ICML 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments show that GIND is effective at capturing long-range dependencies and performs well on both homophilic and heterophilic graphs with nonlinear diffusion.
Researcher Affiliation | Academia | 1 School of Mathematical Sciences, Peking University, China; 2 Key Lab. of Machine Perception (MoE), School of Artificial Intelligence, Peking University, China; 3 Institute for Artificial Intelligence, Peking University, China; 4 Peng Cheng Laboratory, China.
Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks.
Open Source Code | Yes | Our code is available at https://github.com/7qchen/GIND.
Open Datasets | Yes | We adopt the 5 heterophilic datasets: Cornell, Texas, Wisconsin, Chameleon and Squirrel (Pei et al., 2019). For homophilic datasets, we adopt 3 citation datasets: Cora, CiteSeer and PubMed. For graph classification, we choose a total of 5 bioinformatics benchmarks: MUTAG, PTC, COX2, NCI1 and PROTEINS (Yanardag & Vishwanathan, 2015).
Dataset Splits | Yes | For all datasets except PPI, we adopt the standard data split of Pei et al. (2019) and report the average performance over the 10 random splits, while for PPI we follow the train/validation/test split used in GraphSAGE (Hamilton et al., 2017). Following the settings of Yanardag & Vishwanathan (2015), we conduct 10-fold cross-validation with LIBSVM (Chang & Lin, 2011) and report the average prediction accuracy and standard deviations in Table 4.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory, or cloud instance types) used for running its experiments.
Software Dependencies | No | The paper names its software stack ("We implement our GIND based on the PyTorch Geometric library (Fey & Lenssen, 2019)"; "We conduct 10-fold cross-validation with LIBSVM (Chang & Lin, 2011)") but does not specify versions of these dependencies.
Experiment Setup | Yes | In terms of hyperparameters, we tune the learning rate, weight decay, α and number of iteration steps with the Tree-structured Parzen Estimator approach (Akiba et al., 2019). We use a 4-layer model for PPI and a 3-layer model for the two large datasets, Chameleon and Squirrel, as well as for all the datasets used for graph-level tasks; for the remaining datasets, we adopt a model with only one layer. We use a linear output function for all the node-level tasks and an MLP for all the graph-level tasks. We adopt layer normalization (LN) (Ba et al., 2016) for all the node-level tasks and instance normalization (IN) (Ulyanov et al., 2016) for all the graph-level tasks.
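
For context on the dataset-split protocol quoted in the table, the sketch below shows one way to load a heterophilic benchmark together with the 10 standard splits of Pei et al. (2019) through PyTorch Geometric. This is a minimal illustration, not the authors' code; the dataset root path, the assumption that the masks carry one column per split, and the omitted training loop are assumptions.

```python
# Minimal sketch: load a heterophilic benchmark and iterate over its
# standard splits using PyTorch Geometric (not the authors' code).
from torch_geometric.datasets import WebKB

dataset = WebKB(root="data/cornell", name="Cornell")  # root path is arbitrary
data = dataset[0]

# In PyG, the WebKB datasets ship train/val/test masks with one column per
# standard split, so reported accuracy is averaged over the columns.
print(data.train_mask.shape)  # expected: [num_nodes, 10]

for split in range(data.train_mask.size(1)):
    train_mask = data.train_mask[:, split]
    val_mask = data.val_mask[:, split]
    test_mask = data.test_mask[:, split]
    # Train on train_mask, select hyperparameters on val_mask,
    # and report accuracy on test_mask (model code omitted here).
```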
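The experiment-setup row cites the Tree-structured Parzen Estimator (Akiba et al., 2019) for tuning the learning rate, weight decay, α and iteration steps. A minimal sketch using Optuna's TPE sampler is shown below; the search ranges and the train_and_evaluate stub are hypothetical placeholders, not values from the paper.

```python
# Minimal sketch of TPE-based hyperparameter search with Optuna.
import optuna


def train_and_evaluate(lr, weight_decay, alpha, iterations):
    # Hypothetical placeholder: train GIND with these hyperparameters and
    # return validation accuracy. A constant keeps the sketch runnable.
    return 0.0


def objective(trial):
    # Search spaces are illustrative assumptions, not ranges from the paper.
    lr = trial.suggest_float("lr", 1e-4, 1e-1, log=True)
    weight_decay = trial.suggest_float("weight_decay", 1e-6, 1e-2, log=True)
    alpha = trial.suggest_float("alpha", 0.0, 1.0)
    iterations = trial.suggest_int("iterations", 4, 64)
    return train_and_evaluate(lr, weight_decay, alpha, iterations)


# TPE is Optuna's default sampler; it is stated explicitly here for clarity.
study = optuna.create_study(direction="maximize",
                            sampler=optuna.samplers.TPESampler(seed=0))
study.optimize(objective, n_trials=100)
print(study.best_params)
```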