Hyperbolic Variational Graph Neural Network for Modeling Dynamic Graphs
Authors: Li Sun, Zhongbao Zhang, Jiawei Zhang, Feiyang Wang, Hao Peng, Sen Su, Philip S. Yu
AAAI 2021, pp. 4375-4383
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show that HVGNN outperforms state-of-the-art baselines on real-world datasets. We evaluate HVGNN by link prediction and node classification on several datasets. We repeat each experiment 10 times and report the mean with the standard deviations. |
| Researcher Affiliation | Academia | (1) State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, China; (2) IFM Lab, Department of Computer Science, Florida State University, FL, USA; (3) Beijing Advanced Innovation Center for Big Data and Brain Computing, Beihang University, China; (4) Department of Computer Science, University of Illinois at Chicago, IL, USA |
| Pseudocode | Yes | Algorithm 1: Reparametrisable Sampling |
| Open Source Code | No | Both results and source code will be released to the public, and others who otherwise have limited access to the models can use our open-source materials in their research or applications. |
| Open Datasets | Yes | We choose three real-world datasets, i.e., Reddit (Xu et al. 2020), Wikipedia (Kumar, Zhang, and Leskovec 2019) and DBLP (Zhou et al. 2018a). |
| Dataset Splits | Yes | We do a chronological train-validation-test split of 80% / 5% / 15% according to the timestamps. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., library names with versions). |
| Experiment Setup | Yes | Node representations are initialized with the nodes' raw features. We do a chronological train-validation-test split of 80% / 5% / 15% according to the timestamps. The graph models are trained by minimizing the cross-entropy loss using negative sampling. We randomly sample an equal number of negative node pairs to the positive links... In the experiment, we stack the corresponding attention layer twice in the models above. |
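
The Dataset Splits and Experiment Setup rows describe a chronological 80% / 5% / 15% split by timestamp and training with negative sampling. The sketch below illustrates that protocol under plain assumptions; the helper names (`chronological_split`, `sample_negatives`) are ours rather than the paper's, and for brevity the negative sampler does not re-check collisions with observed links.

```python
import numpy as np

def chronological_split(edges, timestamps, ratios=(0.80, 0.05, 0.15)):
    """Split a temporal edge list into train/val/test by timestamp.

    `edges` is an (E, 2) array of (src, dst) pairs and `timestamps` an
    (E,) array of event times; ratios follow the 80/5/15 split reported
    in the paper.
    """
    order = np.argsort(timestamps)            # oldest interactions first
    edges = edges[order]
    n = len(edges)
    n_train = int(ratios[0] * n)
    n_val = int(ratios[1] * n)
    return (edges[:n_train],
            edges[n_train:n_train + n_val],
            edges[n_train + n_val:])

def sample_negatives(pos_edges, num_nodes, rng=np.random.default_rng(0)):
    """Draw one random (presumably non-observed) node pair per positive link."""
    return rng.integers(0, num_nodes, size=pos_edges.shape)
```

The model would then be trained by minimizing a binary cross-entropy objective over the positive links and the equally sized set of sampled negatives, as stated in the setup row.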
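
The Pseudocode row cites Algorithm 1, a reparametrisable sampling procedure for hyperbolic latent variables. Below is a minimal sketch of the underlying idea, assuming a Lorentz-model hyperboloid of curvature -1 and sampling only at the origin; the paper's full algorithm also handles arbitrary base points and a time-aware prior, which are not reproduced here, and the function names are ours.

```python
import torch

def lorentz_expmap0(v, eps=1e-7):
    """Exponential map at the hyperboloid origin (curvature -1).

    `v` is the spatial part of a tangent vector at the origin, shape
    (..., d); returns a point on the hyperboloid in ambient coordinates,
    shape (..., d + 1).
    """
    norm = v.norm(dim=-1, keepdim=True).clamp_min(eps)
    x0 = torch.cosh(norm)                 # time-like coordinate
    xr = torch.sinh(norm) * v / norm      # space-like coordinates
    return torch.cat([x0, xr], dim=-1)

def reparameterized_sample(mu_tangent, log_sigma):
    """Reparameterization-trick sample pushed onto the hyperboloid.

    Draws eps ~ N(0, I), forms the tangent vector mu + sigma * eps at the
    origin, and maps it to the manifold with the exponential map, so
    gradients flow through mu and sigma.
    """
    eps = torch.randn_like(mu_tangent)
    v = mu_tangent + torch.exp(log_sigma) * eps
    return lorentz_expmap0(v)
```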