RAW-GNN: RAndom Walk Aggregation based Graph Neural Network
Authors: Di Jin, Rui Wang, Meng Ge, Dongxiao He, Xiang Li, Wei Lin, Weixiong Zhang
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental results showed that the new method achieved state-of-the-art performance on a variety of homophily and heterophily graphs. We now compare our RAW-GNN with the state-of-the-art models on the problems of node classification and visualization using seven real-world datasets varying from strong homophily to strong heterophily. |
| Researcher Affiliation | Collaboration | Di Jin (1), Rui Wang (1), Meng Ge (1), Dongxiao He (1), Xiang Li (2), Wei Lin (2) and Weixiong Zhang (3). (1) College of Intelligence and Computing, Tianjin University, Tianjin, China; (2) Meituan, Beijing, China; (3) Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong |
| Pseudocode | No | The paper describes the architecture of RAW-GNN, but it does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any statement about making its source code publicly available or provide any links to a code repository. |
| Open Datasets | Yes | Cora, Citeseer and Pubmed are homophilic citation network benchmark datasets [Sen et al., 2008; Namata et al., 2012], Cornell, Texas and Wisconsin are webpage datasets collected from computer science departments of corresponding universities [Pei et al., 2020], Actor is a heterophilic actor co-occurrence network [Tang et al., 2009]. |
| Dataset Splits | Yes | In each dataset, 48% of the nodes are used as the training set, 32% of the nodes are used as the validation set, and the rest as the test set. |
| Hardware Specification | No | The paper does not specify the hardware used for running experiments (e.g., GPU models, CPU types, or memory). |
| Software Dependencies | No | The paper mentions "Pytorch" but does not provide specific version numbers for Pytorch or any other software dependencies. |
| Experiment Setup | Yes | For the random walk sampling in RAW-GNN, we use DFS strategy with p = 10, q = 0.1 and BFS strategy with p = 0.1, q = 10. We choose different path lengths from {3, 4, 5, 6, 7} for different datasets. With every strategy, we sample 6 paths for each node in one epoch. For the RNN-based aggregator, we use GRU with 32 hidden units and attention head number is set to 2. The learning rate is set to 0.05. We adopt the Adam optimizer and the default initialization in Pytorch. |
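The sampling procedure in the experiment-setup row uses node2vec-style second-order biased walks, where a high return parameter p with a low in-out parameter q yields DFS-like walks (p = 10, q = 0.1) and the reverse yields BFS-like walks (p = 0.1, q = 10). Since the paper releases no code, the sketch below is illustrative: function names are made up, and the graph is assumed unweighted and given as an adjacency dict.

```python
import random

def biased_random_walk(adj, start, length, p, q):
    """node2vec-style second-order biased walk (illustrative, not the paper's code).

    adj: dict mapping node -> list of neighbor nodes (unweighted graph assumed)
    p:   return parameter; large p discourages stepping back to the previous node
    q:   in-out parameter; small q favors outward, DFS-like exploration
    """
    walk = [start]
    while len(walk) < length:
        cur = walk[-1]
        neighbors = adj.get(cur, [])
        if not neighbors:  # dead end: stop early
            break
        if len(walk) == 1:  # first step is uniform (no previous node yet)
            walk.append(random.choice(neighbors))
            continue
        prev = walk[-2]
        weights = []
        for nxt in neighbors:
            if nxt == prev:                 # distance 0 from prev: return step
                weights.append(1.0 / p)
            elif nxt in adj.get(prev, []):  # distance 1 from prev: stay local
                weights.append(1.0)
            else:                           # distance 2 from prev: move outward
                weights.append(1.0 / q)
        walk.append(random.choices(neighbors, weights=weights)[0])
    return walk

def sample_paths(adj, length, paths_per_node=6):
    """Sample 6 DFS-like and 6 BFS-like paths per node, as in the paper's setup."""
    dfs = {v: [biased_random_walk(adj, v, length, p=10, q=0.1)
               for _ in range(paths_per_node)] for v in adj}
    bfs = {v: [biased_random_walk(adj, v, length, p=0.1, q=10)
               for _ in range(paths_per_node)] for v in adj}
    return dfs, bfs
```

Each sampled path would then be fed to the GRU-based aggregator described in the setup (32 hidden units, 2 attention heads); path length is a per-dataset hyperparameter chosen from {3, 4, 5, 6, 7}.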