Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

GAEN: Graph Attention Evolving Networks

Authors: Min Shi, Yu Huang, Xingquan Zhu, Yufei Tang, Yuan Zhuang, Jianxun Liu

IJCAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments and validations on four real-world dynamic graphs demonstrate that GAEN outperforms the state-of-the-art in both link prediction and node classification tasks.
Researcher Affiliation | Academia | Min Shi¹, Yu Huang¹, Xingquan Zhu¹, Yufei Tang¹, Yuan Zhuang² and Jianxun Liu³. ¹Dept. of Computer & Elec. Engineering and Computer Science, Florida Atlantic University, USA; ²State Key Lab of Info. Eng. in Surveying, Mapping and Remote Sensing, Wuhan University, China; ³School of Computer Science and Engineering, Hunan University of Science and Technology, China.
Pseudocode | Yes | The detailed training procedure of GAEN is summarized in Algorithm 1.
Open Source Code | Yes | For detailed parameter settings, please refer to the GitHub link: https://github.com/codeshareabc/GAEN
Open Datasets | Yes | We adopt four temporal networks: Enron [Klimt and Yang, 2004], UIC [Panzarasa et al., 2009], Primary School [Stehlé et al., 2011] and DBLP (https://dblp.uni-trier.de), summarized in Table 1.
Dataset Splits | Yes | For link prediction, 20% of the links are used as validation to fine-tune the hyper-parameters, and the remainder is split 25% / 75% for training and test. For node classification, 20% of the nodes are used for validation; 30% and 70% of the remaining nodes are then used for training and test, respectively.
Hardware Specification | No | The paper does not specify the hardware (e.g., GPU model, CPU model, memory) used to run the experiments.
Software Dependencies | No | The paper discusses methods and models (e.g., GRU, GAT, GCN) but does not list software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup | Yes | To train the model, the number of attention heads is set to 8, the hidden dimension of the GRU networks is set to 128, and the learning rate is set to 1e-4.
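The link-prediction split reported above (20% validation, then 25% / 75% of the remainder for training and test) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function name and random seeding are assumptions.

```python
import random

def split_links(links, val_frac=0.20, train_frac=0.25, seed=0):
    """Sketch of the reported link-prediction split:
    20% held out for validation, then 25% / 75% of the
    remainder used for training / test."""
    rng = random.Random(seed)  # fixed seed for reproducibility (assumption)
    links = list(links)
    rng.shuffle(links)
    n_val = int(len(links) * val_frac)
    val, rest = links[:n_val], links[n_val:]
    n_train = int(len(rest) * train_frac)
    train, test = rest[:n_train], rest[n_train:]
    return train, val, test

train, val, test = split_links(range(1000))
print(len(train), len(val), len(test))  # 200 200 600
```

The node-classification split follows the same pattern with 20% validation and a 30% / 70% train/test division of the remaining nodes.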
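The reported training hyper-parameters can be collected in a small configuration fragment; the key names below are illustrative assumptions, not keys from the authors' repository.

```python
# Hyper-parameters as reported in the paper's experiment setup.
# Key names are hypothetical; only the values come from the paper.
GAEN_CONFIG = {
    "num_attention_heads": 8,   # attention heads per GAT-style layer
    "gru_hidden_dim": 128,      # hidden dimension of the GRU networks
    "learning_rate": 1e-4,      # optimizer learning rate
}
```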