SEA-GWNN: Simple and Effective Adaptive Graph Wavelet Neural Network

Authors: Swakshar Deb, Sejuti Rahman, Shafin Rahman

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Finally, we benchmark our model on several real-world datasets spanning four distinct categories, including citation networks, webpages, the film industry, and large-scale graphs, and the experimental results showcase the efficacy of the proposed SEA-GWNN.
Researcher Affiliation | Academia | Swakshar Deb¹, Sejuti Rahman¹*, Shafin Rahman²; ¹Department of Robotics and Mechatronics Engineering, University of Dhaka, Bangladesh; ²Department of Electrical and Computer Engineering, North South University, Bangladesh; swakshar.sd@gmail.com, sejuti.rahman@du.ac.bd, shafin.rahman@northsouth.edu
Pseudocode | Yes | Algorithm 1: SEA-GWNN model training.
Open Source Code | No | The calculation of U and P, i.e., the lifting operators, can be executed in parallel for all edges without any computationally intensive matrix operations such as eigendecompositions. Moreover, owing to the inherent sparsity of graphs (e.g., A and U), the time complexity of message propagation with PyTorch Geometric (Fey and Lenssen 2019) is O(|E|d) at each layer. (Propagation sketch below the table.)
Open Datasets | Yes | Dataset Description: We use three common homophilic citation graphs, Cora, Citeseer, and PubMed (Sen et al. 2008), and five heterophilic datasets, namely Cornell, Wisconsin (Pei et al. 2019), Film (Tang et al. 2009), Chameleon, and Squirrel. Moreover, we utilize large-scale graphs, namely Penn94 (Traud, Mucha, and Porter 2012), Genius, and Ogbn-Arxiv (Hu et al. 2020). (Dataset-loading sketch below the table.)
Dataset Splits | Yes | Similar to Pei et al. (2019), the nodes in each class were randomly split into training, validation, and test sets in a 48%/32%/20% ratio. (Split sketch below the table.)
Hardware Specification | Yes | To ensure a fair comparison, all models are evaluated on an NVIDIA RTX 3080 GPU.
Software Dependencies | No | Moreover, owing to the inherent sparsity of graphs (e.g., A and U), the time complexity of message propagation with PyTorch Geometric (Fey and Lenssen 2019) is O(|E|d) at each layer.
Experiment Setup | Yes | Specifically, weight decay factors were explored in the range [0.000005, 0.01], while dropout probabilities were selected from [0, 0.9] in steps of 0.1. The hyperparameters were fine-tuned by monitoring performance on the validation sets. Training continued for a maximum of 1000 epochs, with early stopping patience set at 150 epochs. (Training-loop sketch below the table.)
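
The O(|E|d) claim quoted under Open Source Code and Software Dependencies can be illustrated with a minimal PyTorch Geometric layer. This is an editorial sketch, not the authors' implementation: the class name and edge weighting are assumptions; the point is only that propagate touches each edge exactly once, so one layer over d-dimensional features costs O(|E|d).

```python
import torch
from torch_geometric.nn import MessagePassing


class SparsePropagation(MessagePassing):
    """Illustrative (hypothetical) layer: one message per edge, so a forward
    pass over d-dimensional features costs O(|E|d)."""

    def __init__(self):
        super().__init__(aggr='add')  # sum incoming messages at each node

    def forward(self, x, edge_index, edge_weight):
        # x: [N, d] node features; edge_index: [2, |E|]; edge_weight: [|E|]
        return self.propagate(edge_index, x=x, edge_weight=edge_weight)

    def message(self, x_j, edge_weight):
        # One multiply per edge; no dense matrices or eigendecompositions.
        return edge_weight.view(-1, 1) * x_j


x = torch.randn(4, 8)                                  # 4 nodes, d = 8
edge_index = torch.tensor([[0, 1, 2], [1, 2, 3]])      # 3 directed edges
edge_weight = torch.ones(3)
out = SparsePropagation()(x, edge_index, edge_weight)  # shape [4, 8]
```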
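The datasets quoted under Open Datasets are all available through standard loaders; the sketch below shows one plausible way to fetch them with PyTorch Geometric's built-in dataset classes. The root paths are assumptions, and the paper does not state which loaders were actually used.

```python
from torch_geometric.datasets import (Planetoid, WebKB, WikipediaNetwork,
                                      Actor, LINKXDataset)

# Homophilic citation graphs.
cora = Planetoid(root='data/planetoid', name='Cora')   # also 'CiteSeer', 'PubMed'
# Heterophilic webpage, Wikipedia, and film graphs.
cornell = WebKB(root='data/webkb', name='Cornell')     # also 'Wisconsin'
chameleon = WikipediaNetwork(root='data/wiki', name='chameleon')  # also 'squirrel'
film = Actor(root='data/actor')                        # the Film dataset
# Large-scale graphs.
penn94 = LINKXDataset(root='data/linkx', name='penn94')  # also 'genius'
# Ogbn-Arxiv is distributed via the ogb package:
# from ogb.nodeproppred import PygNodePropPredDataset
# arxiv = PygNodePropPredDataset(name='ogbn-arxiv')
```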
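The per-class 48%/32%/20% split quoted under Dataset Splits can be reproduced with a short helper such as the following; the function name, seed handling, and rounding are assumptions rather than details from the paper.

```python
import torch


def per_class_split(y, train_frac=0.48, val_frac=0.32, seed=0):
    """Randomly split node indices within each class into train/val/test masks."""
    g = torch.Generator().manual_seed(seed)
    n = y.size(0)
    train_mask = torch.zeros(n, dtype=torch.bool)
    val_mask = torch.zeros(n, dtype=torch.bool)
    test_mask = torch.zeros(n, dtype=torch.bool)
    for c in y.unique():
        idx = (y == c).nonzero(as_tuple=True)[0]
        idx = idx[torch.randperm(idx.numel(), generator=g)]  # shuffle class c
        n_train = int(train_frac * idx.numel())
        n_val = int(val_frac * idx.numel())
        train_mask[idx[:n_train]] = True
        val_mask[idx[n_train:n_train + n_val]] = True
        test_mask[idx[n_train + n_val:]] = True  # remaining ~20%
    return train_mask, val_mask, test_mask
```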
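The protocol quoted under Experiment Setup (at most 1000 epochs, early stopping after 150 epochs without validation improvement) maps onto a loop of the following shape. The model, optimizer, criterion, and evaluate callable are hypothetical placeholders, not the authors' code.

```python
import torch


def train_with_early_stopping(model, data, optimizer, criterion, evaluate,
                              max_epochs=1000, patience=150):
    """Hypothetical training loop matching the quoted protocol; `evaluate`
    is a user-supplied callable returning validation accuracy."""
    best_val, wait, best_state = 0.0, 0, None
    for epoch in range(max_epochs):
        model.train()
        optimizer.zero_grad()
        out = model(data.x, data.edge_index)
        loss = criterion(out[data.train_mask], data.y[data.train_mask])
        loss.backward()
        optimizer.step()

        val_acc = evaluate(model, data, data.val_mask)
        if val_acc > best_val:
            best_val, wait = val_acc, 0  # new best: reset the patience counter
            best_state = {k: v.detach().clone()
                          for k, v in model.state_dict().items()}
        else:
            wait += 1
            if wait >= patience:  # 150 epochs with no improvement: stop
                break
    if best_state is not None:
        model.load_state_dict(best_state)  # restore the best checkpoint
    return best_val
```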