Residual Hyperbolic Graph Convolution Networks
Authors: Yangkai Xue, Jindou Dai, Zhipeng Lu, Yuwei Wu, Yunde Jia
AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiment results demonstrate the effectiveness of R-HGCNs under various graph convolution layers and different structures of product manifolds. Experiments are performed on the semi-supervised node classification task. We first evaluate the performance of R-HGCN under different model configurations, including various graph convolution layers and different structures of product manifolds. Then, we compare with several state-of-the-art Euclidean GCNs and HGCNs, showing that R-HGCN achieves competitive results. Further, we compare with DropConnect [Wan et al. 2013], a related regularization method for deep GCNs. Dataset statistics (Table 1): PUBMED (3 classes, 19,717 nodes, 44,338 edges, 500 features); CITESEER (6 classes, 3,327 nodes, 4,732 edges, 3,703 features); CORA (7 classes, 2,708 nodes, 5,429 edges, 1,433 features); AIRPORT (4 classes, 3,188 nodes, 18,631 edges, 4 features). Validation Experiments: here we demonstrate the effectiveness of the R-HGCN and our regularization method under different model configurations. |
| Researcher Affiliation | Academia | (1) Beijing Key Laboratory of Intelligent Information Technology, School of Computer Science & Technology, Beijing Institute of Technology, China; (2) Guangdong Laboratory of Machine Perception and Intelligent Computing, Shenzhen MSU-BIT University, China. Emails: xue.yangkai@bit.edu.cn, daijindou@foxmail.com, zhipeng.lu@hotmail.com, wuyuwei@bit.edu.cn, jiayunde@smbu.edu.cn |
| Pseudocode | No | The paper describes the mathematical formulations and operations of the proposed methods but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain an explicit statement or a link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | We use four standard commonly-used citation network graph datasets: PUBMED, CITESEER, CORA and AIRPORT [Sen et al. 2008]. Dataset statistics are summarized in Table 1. |
| Dataset Splits | No | The paper mentions "Validation Experiments" and states "Experiment details see in the supplementary material." However, the provided text does not explicitly state the training, validation, or test dataset splits (e.g., percentages or counts), nor does it reference standard predefined splits in sufficient detail for reproducibility. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU models, CPU types, memory) used to run the experiments. It only discusses the experimental setup at a conceptual level. |
| Software Dependencies | No | The paper does not list specific software dependencies with their version numbers that would be necessary to replicate the experiments. |
| Experiment Setup | No | The paper mentions that "α and β are hyper-parameters to control the weight of hyperbolic residual connection and hyperbolic identity mapping" and that "We set σ = η/(1 − η)", where η denotes the drop rate (see the illustrative sketch below the table). It also states "Experiment details see in the supplementary material." However, the main text does not provide specific numerical values for hyperparameters or other detailed system-level training settings needed for exact replication. |
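
The hyper-parameters α and β and the drop-rate relation σ = η/(1 − η) are only described at a high level in the main text. The minimal PyTorch sketch below illustrates one plausible reading: an α-weighted hyperbolic residual connection built from standard Poincaré-ball operations, plus the quoted σ computation. The function names (`mobius_add`, `mobius_scalar_mul`, `hyperbolic_residual`, `drop_scale`), the default value of α, and how β enters the layer are assumptions for illustration, not the paper's actual implementation.

```python
import torch


def mobius_add(x, y, c=1.0):
    # Mobius addition on the Poincare ball of curvature -c
    # (standard formula; not taken from the paper's supplementary material).
    x2 = (x * x).sum(dim=-1, keepdim=True)
    y2 = (y * y).sum(dim=-1, keepdim=True)
    xy = (x * y).sum(dim=-1, keepdim=True)
    num = (1 + 2 * c * xy + c * y2) * x + (1 - c * x2) * y
    den = 1 + 2 * c * xy + (c ** 2) * x2 * y2
    return num / den.clamp_min(1e-15)


def mobius_scalar_mul(r, x, c=1.0):
    # Mobius scalar multiplication r (*) x on the Poincare ball.
    sqrt_c = c ** 0.5
    norm = x.norm(dim=-1, keepdim=True).clamp_min(1e-15)
    scaled = torch.tanh(r * torch.atanh((sqrt_c * norm).clamp(max=1 - 1e-5)))
    return scaled * x / (sqrt_c * norm)


def hyperbolic_residual(h_layer, h_input, alpha=0.1, c=1.0):
    # Hypothetical hyperbolic residual connection: Mobius-add an
    # alpha-weighted copy of the input representation onto the current
    # layer output. The paper's exact combination rule (and how beta
    # weights the hyperbolic identity mapping) is not given in the main text.
    return mobius_add(h_layer, mobius_scalar_mul(alpha, h_input, c), c)


def drop_scale(drop_rate):
    # sigma = eta / (1 - eta), as quoted from the paper; how sigma enters
    # the regularizer (e.g., as a noise variance) is not specified here.
    return drop_rate / (1.0 - drop_rate)
```

As a usage note, `hyperbolic_residual(h_conv, h_0, alpha=0.1)` would combine a graph-convolution output `h_conv` with the initial hyperbolic embedding `h_0`; whether the residual branch uses the initial or the previous layer's representation is another assumption that the supplementary material would need to confirm.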