Exploring the Role of Node Diversity in Directed Graph Representation Learning

Authors: Jincheng Huang, Yujie Mo, Ping Hu, Xiaoshuang Shi, Shangbo Yuan, Zeyu Zhang, Xiaofeng Zhu

IJCAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on seven real-world datasets validate the superior performance of our method compared to state-of-the-art methods in terms of both node classification and link prediction tasks.
Researcher Affiliation | Academia | Jincheng Huang (1), Yujie Mo (1), Ping Hu (1), Xiaoshuang Shi (1), Shangbo Yuan (3), Zeyu Zhang (2), Xiaofeng Zhu (1). (1) School of Computer Science and Engineering, University of Electronic Science and Technology of China; (2) Huazhong Agricultural University; (3) School of Engineering and Design, Technical University of Munich.
Pseudocode | No | The paper includes a flowchart (Figure 3) but does not provide pseudocode or a clearly labeled algorithm block.
Open Source Code | No | The paper does not provide any explicit statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | We evaluate the effectiveness of the proposed method on 2 homophilic datasets and 5 heterophilic datasets. Homophilic datasets include Cora-ML and Citeseer-Full [Bojchevski and Günnemann, 2018]. Heterophilic datasets include Chameleon, Squirrel [Pei et al., 2020], Roman-Empire [Platonov et al., 2023], Arxiv-Year [Leskovec et al., 2005], and Snap-Patents [Leskovec and Krevl, 2014].
Dataset Splits | Yes | Specifically, for the node classification task, we split all datasets as in Dir-GNN [Rossi et al., 2023] and the details can be found in the Appendix. For the directed graph link prediction task, we remove 10% of edges for testing, 5% for validation, and use the rest of the edges for training.
Hardware Specification | Yes | We conduct all experiments on a server with Nvidia RTX 4090 GPUs (24GB memory each).
Software Dependencies | No | The paper mentions optimizing parameters by Adam optimization but does not provide specific version numbers for software dependencies (e.g., Python, PyTorch, TensorFlow).
Experiment Setup | Yes | In the proposed method, we optimize all parameters by the Adam optimization [Kingma and Ba, 2015] with the learning rate in the range of {0.005, 0.01} and set the weight decay as 0. Moreover, we set the number of model layers in the range of {4, 5, 6}, set the dropout in the range of {0.0, 0.35, 0.5, 0.6}, and set the size of the hidden unit in the range of {32, 128, 256}. We set α for preserving the representation of the previous layer in the range of {0.0, 0.3, 0.5, 0.8, 1.0}, and set λ in our regularization term in the range of {0.0, 0.1, 0.2, 0.9}.
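
The Open Datasets row lists standard public benchmarks, but the paper does not state its data-loading code. As a hedged sketch, most of these graphs are available through PyTorch Geometric loaders; Arxiv-Year and Snap-Patents are typically obtained from the LINKX / non-homophilous benchmark release instead. The loader names below are PyG's, not necessarily what the authors used.

```python
# Hedged sketch: loading the publicly available benchmarks with PyTorch Geometric.
from torch_geometric.datasets import (
    CitationFull,               # Cora-ML, Citeseer-Full [Bojchevski and Günnemann, 2018]
    WikipediaNetwork,           # Chameleon, Squirrel [Pei et al., 2020]
    HeterophilousGraphDataset,  # Roman-Empire [Platonov et al., 2023]
)

root = "data"  # hypothetical download directory

cora_ml = CitationFull(root, name="Cora_ML")[0]
citeseer_full = CitationFull(root, name="CiteSeer")[0]
chameleon = WikipediaNetwork(root, name="chameleon")[0]
squirrel = WikipediaNetwork(root, name="squirrel")[0]
roman_empire = HeterophilousGraphDataset(root, name="Roman-empire")[0]

# Arxiv-Year and Snap-Patents are distributed with the LINKX / non-homophilous
# benchmark code rather than as built-in PyG loaders, so they are omitted here.
```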
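The Dataset Splits row quotes a 10% test / 5% validation / 85% train edge split for directed link prediction. A minimal sketch of such a split, assuming PyTorch Geometric's RandomLinkSplit transform (the paper does not say which tooling it used):

```python
# Hedged sketch of the quoted edge split for directed link prediction.
import torch
from torch_geometric.data import Data
from torch_geometric.transforms import RandomLinkSplit

# Toy directed graph standing in for one of the benchmark datasets.
edge_index = torch.randint(0, 100, (2, 500))
data = Data(x=torch.randn(100, 16), edge_index=edge_index)

split = RandomLinkSplit(
    num_val=0.05,          # 5% of edges for validation
    num_test=0.10,         # 10% of edges for testing
    is_undirected=False,   # keep edge directions
    add_negative_train_samples=True,
)
train_data, val_data, test_data = split(data)
```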
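The Experiment Setup row enumerates the Adam settings and hyperparameter ranges. A hedged sketch of that search grid and optimizer configuration follows; since the model code is not released, a dummy parameter stands in for model.parameters().

```python
# Hedged sketch of the reported hyperparameter grid and Adam configuration.
from itertools import product

import torch

grid = {
    "lr":      [0.005, 0.01],
    "layers":  [4, 5, 6],
    "dropout": [0.0, 0.35, 0.5, 0.6],
    "hidden":  [32, 128, 256],
    "alpha":   [0.0, 0.3, 0.5, 0.8, 1.0],  # weight on the previous layer's representation
    "lam":     [0.0, 0.1, 0.2, 0.9],       # coefficient of the regularization term
}

configs = [dict(zip(grid, values)) for values in product(*grid.values())]
print(f"{len(configs)} candidate configurations")  # 2 * 3 * 4 * 3 * 5 * 4 = 1440

dummy_param = torch.nn.Parameter(torch.zeros(1))    # stand-in for model.parameters()
optimizer = torch.optim.Adam([dummy_param],
                             lr=configs[0]["lr"],
                             weight_decay=0.0)       # weight decay fixed at 0, as reported
```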