Graph Ordering Attention Networks

Authors: Michail Chatzianastasis, Johannes Lutzeyer, George Dasoulas, Michalis Vazirgiannis

AAAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We perform an extensive evaluation of our GOAT model and compare against a wide variety of state-of-the-art GNNs, on three synthetic datasets as well as on nine node-classification benchmarks.
Researcher Affiliation | Academia | École Polytechnique, Institut Polytechnique de Paris, France; Harvard University, Cambridge, MA, USA
Pseudocode | No | The paper describes the architecture of the GOAT layer and its components using textual descriptions and a diagram, but does not provide any structured pseudocode or algorithm blocks.
Open Source Code | Yes | Our code is publicly available on GitHub. Code: https://github.com/MichailChatzianastasis/GOAT
Open Datasets | Yes | We utilize nine well-known node classification benchmarks to validate our proposed model in real-world scenarios originating from a variety of different applications. Specifically, we use 3 citation network benchmark datasets: Cora, CiteSeer (Sen et al. 2008), ogbn-arxiv (Hu et al. 2020), 1 disease spreading model: Disease (Chami et al. 2019), 1 social network: LastFM Asia (Rozemberczki and Sarkar 2020), 2 co-purchase graphs: Amazon Computers, Amazon Photo (Shchur et al. 2019) and 2 co-authorship graphs: Coauthor CS, Physics (Shchur et al. 2019).
Dataset Splits | Yes | We use 60/20/20 percent of nodes for training, validation and testing. We perform a hyperparameter search for all models on a validation set. (An illustrative loading and split sketch follows the table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies | No | The paper mentions the use of the Adam optimizer but does not specify software dependencies like programming languages or libraries with version numbers.
Experiment Setup | Yes | We use the Adam optimizer (Kingma and Ba 2015) with an initial learning rate of 0.005 and early stopping for all models and datasets. We perform a hyperparameter search for all models on a validation set. The hyperparameters include the size of hidden dimensions, dropout, and number of attention heads for GAT and GOAT. We fix the number of layers to 2. (An illustrative training-setup sketch follows the table.)
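
The reported datasets and 60/20/20 node split can be illustrated with a short sketch. This is not taken from the paper or its repository: it assumes PyTorch Geometric (the paper does not name its libraries), uses the Planetoid version of Cora as one of the cited benchmarks, and builds a random 60/20/20 train/validation/test split over nodes; all names and paths are illustrative.

```python
# Hypothetical sketch (not from the paper): load one cited benchmark with
# PyTorch Geometric and build a random 60/20/20 node split.
import torch
from torch_geometric.datasets import Planetoid

dataset = Planetoid(root="data/Cora", name="Cora")  # Cora citation network
data = dataset[0]

# Random 60/20/20 train/validation/test split over nodes.
num_nodes = data.num_nodes
perm = torch.randperm(num_nodes)
n_train = int(0.6 * num_nodes)
n_val = int(0.2 * num_nodes)

train_mask = torch.zeros(num_nodes, dtype=torch.bool)
val_mask = torch.zeros(num_nodes, dtype=torch.bool)
test_mask = torch.zeros(num_nodes, dtype=torch.bool)
train_mask[perm[:n_train]] = True
val_mask[perm[n_train:n_train + n_val]] = True
test_mask[perm[n_train + n_val:]] = True
```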
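A second hedged sketch mirrors the reported experiment setup: Adam with learning rate 0.005, a fixed depth of two layers, early stopping, and hidden size, dropout and number of attention heads as the searched hyperparameters. The GOAT layer itself is not reproduced here (its reference implementation is in the linked repository); a standard two-layer GAT from PyTorch Geometric stands in, and values such as hidden_dim=64, heads=8, dropout=0.5 and patience=100 are placeholders, not the paper's choices. The sketch reuses `data` and the masks from the previous block.

```python
# Hypothetical sketch of the reported setup (Adam, lr 0.005, 2 layers,
# early stopping); a two-layer GAT stands in for the GOAT layer.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GATConv

class TwoLayerGAT(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim, out_dim, heads, dropout):
        super().__init__()
        self.dropout = dropout
        self.conv1 = GATConv(in_dim, hidden_dim, heads=heads, dropout=dropout)
        self.conv2 = GATConv(hidden_dim * heads, out_dim, heads=1, dropout=dropout)

    def forward(self, x, edge_index):
        x = F.dropout(x, p=self.dropout, training=self.training)
        x = F.elu(self.conv1(x, edge_index))
        x = F.dropout(x, p=self.dropout, training=self.training)
        return self.conv2(x, edge_index)

# Hidden dimension, dropout and number of heads are the searched hyperparameters;
# the values below are placeholders.
model = TwoLayerGAT(dataset.num_features, 64, dataset.num_classes, heads=8, dropout=0.5)
optimizer = torch.optim.Adam(model.parameters(), lr=0.005)

best_val, patience, bad_epochs = 0.0, 100, 0
for epoch in range(1000):
    model.train()
    optimizer.zero_grad()
    out = model(data.x, data.edge_index)
    loss = F.cross_entropy(out[train_mask], data.y[train_mask])
    loss.backward()
    optimizer.step()

    # Early stopping on validation accuracy.
    model.eval()
    with torch.no_grad():
        pred = model(data.x, data.edge_index).argmax(dim=-1)
        val_acc = (pred[val_mask] == data.y[val_mask]).float().mean().item()
    if val_acc > best_val:
        best_val, bad_epochs = val_acc, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            break
```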