GAEN: Graph Attention Evolving Networks

Authors: Min Shi, Yu Huang, Xingquan Zhu, Yufei Tang, Yuan Zhuang, Jianxun Liu

IJCAI 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments and validations, on four real-world dynamic graphs, demonstrate that GAEN outperforms the state-of-the-art in both link prediction and node classification tasks."
Researcher Affiliation | Academia | "Min Shi¹, Yu Huang¹, Xingquan Zhu¹, Yufei Tang¹, Yuan Zhuang² and Jianxun Liu³. ¹Dept. of Computer & Elec. Engineering and Computer Science, Florida Atlantic University, USA; ²State Key Lab of Info. Eng. in Surveying, Mapping and Remote Sensing, Wuhan University, China; ³School of Computer Science and Engineering, Hunan University of Science and Technology, China. {mshi2018, yhwang2018, xzhu3, tangy}@fau.edu, {zhy.0908, ljx529}@gmail.com"
Pseudocode | Yes | "The detailed training procedure of GAEN is summarized in Algorithm 1."
Open Source Code | Yes | "For detailed parameter settings, please refer to the GitHub link." (https://github.com/codeshareabc/GAEN)
Open Datasets | Yes | "We adopt four temporal networks Enron [Klimt and Yang, 2004], UCI [Panzarasa et al., 2009], Primary School [Stehlé et al., 2011] and DBLP, summarized in Table 1." (DBLP: https://dblp.uni-trier.de)
Dataset Splits | Yes | "For link prediction, 20% of the links are used as validation to fine-tune the hyper-parameters, and the remaining are split as 25% and 75% for training and test. For node classification, 20% of nodes are used for validation. Then, 30% and 70% of the remaining nodes are respectively used for training and testing." (This protocol is sketched in code below the table.)
Hardware Specification | No | The paper does not specify the hardware (e.g., GPU model, CPU model, memory) used for running the experiments.
Software Dependencies | No | The paper discusses methods and models (e.g., GRU, GAT, GCN) but does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions).
Experiment Setup | Yes | "To train the model, the number of attention heads is set as 8, the hidden dimension in GRU networks is set as 128 and the learning rate is set as 1e-4."
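
As a concrete reading of the Dataset Splits row, here is a minimal sketch of the described protocol, assuming links and nodes are indexed arrays; the helper name split_indices and the item counts are ours for illustration, not from the GAEN codebase.

```python
import numpy as np

def split_indices(n_items, val_frac, train_frac_of_rest, seed=0):
    """Hold out a validation slice first, then split the remainder
    into train/test by the given fraction (returns train, val, test)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_items)
    n_val = int(val_frac * n_items)
    val, rest = idx[:n_val], idx[n_val:]
    n_train = int(train_frac_of_rest * len(rest))
    return rest[:n_train], val, rest[n_train:]

# Link prediction: 20% validation, then 25% / 75% of the rest for train / test.
train_e, val_e, test_e = split_indices(n_items=10_000, val_frac=0.20,
                                       train_frac_of_rest=0.25)
# Node classification: 20% validation, then 30% / 70% of the rest for train / test.
train_n, val_n, test_n = split_indices(n_items=1_000, val_frac=0.20,
                                       train_frac_of_rest=0.30)
```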
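The reported hyper-parameters map directly onto a training configuration. The sketch below wires a multi-head graph-attention encoder into a GRU over graph snapshots using the stated settings (8 attention heads, GRU hidden size 128, learning rate 1e-4). It is an illustration under those assumptions, not the authors' GAEN implementation: the module structure, the per-head dimension of 16, and the use of PyTorch Geometric's GATConv are our choices.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GATConv  # assumed dependency (PyTorch Geometric)

class SnapshotGATGRU(nn.Module):
    """Illustrative encoder: a GAT per snapshot, then a GRU across time steps.
    Mirrors the paper's reported settings, not its exact architecture, and
    assumes a fixed node set shared by all snapshots."""
    def __init__(self, in_dim, gat_dim=16, heads=8, gru_hidden=128):
        super().__init__()
        # 8 attention heads, concatenated: snapshot embedding = gat_dim * heads
        self.gat = GATConv(in_dim, gat_dim, heads=heads)
        self.gru = nn.GRU(gat_dim * heads, gru_hidden, batch_first=True)

    def forward(self, snapshots):
        # snapshots: list of (x, edge_index) pairs, one per time step
        per_step = [self.gat(x, ei) for x, ei in snapshots]  # each [N, gat_dim*heads]
        seq = torch.stack(per_step, dim=1)                   # [N, T, gat_dim*heads]
        out, _ = self.gru(seq)                               # [N, T, 128]
        return out[:, -1]                                    # final-step node states

model = SnapshotGATGRU(in_dim=64)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # reported learning rate
```

In a full run, one would score candidate links or classify nodes from these final states and backpropagate a task loss; the authors' actual procedure is the one summarized in the paper's Algorithm 1.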