Hyperbolic Graph Neural Networks at Scale: A Meta Learning Approach

Authors: Nurendra Choudhary, Nikhil Rao, Chandan Reddy

NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our comparative analysis shows that H-GRAM effectively learns and transfers information in multiple challenging few-shot settings compared to other state-of-the-art baselines. Additionally, we demonstrate that, unlike standard HNNs, our approach is able to scale to large graph datasets and improve performance over its Euclidean counterparts.
Researcher Affiliation | Collaboration | Nurendra Choudhary (Virginia Tech, Arlington, VA, USA; nurendra@vt.edu); Nikhil Rao (Microsoft, Sunnyvale, CA, USA; nikhilrao@microsoft.com); Chandan K. Reddy (Virginia Tech, Arlington, VA, USA; reddy@cs.vt.edu)
Pseudocode | No | The paper describes the model and procedures using text and mathematical equations, but it does not include a clearly labeled pseudocode or algorithm block.
Open Source Code | Yes | Our meta-learning procedure is further detailed in Appendix C and the implementation code with our experiments is available at https://github.com/Akirato/HGRAM.
Open Datasets | Yes | For the task of meta-learning, we utilize the experimental setup from earlier approaches [18]: two synthetic datasets to understand if H-GRAM is able to capture local graph information, and five real-world datasets to evaluate our model's performance in a few-shot setting. Synthetic Cycle [18], Synthetic BA [18], ogbn-arxiv [17], Tissue-PPI [16, 43], FirstMM-DB [25], Fold-PPI [43], Tree-of-Life [44]. For comparison with HNNs, we utilize the standard benchmark citation graphs of Cora [29], Pubmed [24], and Citeseer [29]. (A hypothetical loading sketch follows the table.)
Dataset Splits | Yes | The other hyper-parameters were selected based on the best performance on the validation set (D_val) under the given computational constraints. The evaluation metric for both node classification and link prediction is accuracy, A = |Y = Ŷ| / |Y|. For robust comparison, the metrics are computed over five folds of validation splits, in a 2-shot setting for node classification and a 32-shot setting for link prediction. (A minimal computation sketch follows the table.)
Hardware Specification | Yes | Our experiments are conducted on an Nvidia V100 GPU with 16 GB of VRAM.
Software Dependencies | No | H-GRAM is primarily implemented in PyTorch [26], with geoopt [21] and GraphZoo [35] as support libraries for hyperbolic formulations. The paper mentions software names but does not provide specific version numbers for these dependencies. (A version-logging sketch follows the table.)
Experiment Setup | Yes | For gradient descent, we employ Riemannian Adam [28] with an initial learning rate of 0.01 and standard β values of 0.9 and 0.999. The other hyper-parameters were selected based on the best performance on the validation set (D_val) under the given computational constraints. In our experiments, we empirically set k = 2, d = 32, h = 4, and = 10. We explore the following search space and tune our hyper-parameters for best performance: the number of tasks in each batch is varied among 4, 8, 16, 32, and 64; the learning rates explored for both HNN updates and meta updates are 10^-2, 5×10^-3, 10^-3, and 5×10^-4; and the hidden dimension size is selected from 64, 128, and 256. The final best-performing hyper-parameter setup for real-world datasets is presented in Table 5. (An optimizer and grid-search sketch follows the table.)
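
The citation benchmarks named in the Open Datasets row (Cora, Pubmed, Citeseer) are standard graphs. Below is a minimal loading sketch assuming PyTorch Geometric's Planetoid loader; the paper does not say which loader it uses, so this interface is an illustration only, not H-GRAM's data pipeline.

```python
# Hypothetical sketch: loading the citation benchmarks with PyTorch Geometric.
# The Planetoid loader is an assumption; the paper does not name its data pipeline.
from torch_geometric.datasets import Planetoid

for name in ("Cora", "Pubmed", "Citeseer"):
    dataset = Planetoid(root=f"data/{name}", name=name)
    data = dataset[0]  # a single graph: node features, edge index, and labels
    print(name, data.num_nodes, data.num_edges, dataset.num_classes)
```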
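The Dataset Splits row defines the metric as A = |Y = Ŷ| / |Y|, the fraction of labels predicted correctly, averaged over five validation folds. A minimal sketch of that computation, with hypothetical label arrays standing in for the folds:

```python
import numpy as np

def accuracy(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """A = |Y = Y_hat| / |Y|: the fraction of correctly predicted labels."""
    return float(np.mean(y_true == y_pred))

# Hypothetical (y_true, y_pred) pairs standing in for the paper's five
# validation folds (2-shot node classification, 32-shot link prediction).
folds = [
    (np.array([0, 1, 1, 2]), np.array([0, 1, 2, 2])),
    (np.array([1, 0, 2, 2]), np.array([1, 0, 2, 1])),
]
scores = [accuracy(y, y_hat) for y, y_hat in folds]
print(f"mean accuracy over {len(scores)} folds: {np.mean(scores):.3f}")
```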
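Since the Software Dependencies row notes that no version numbers are pinned, a reproduction should at least record the environment it actually ran under. A small sketch; the guarded lookup is there because geoopt exposing __version__ is an assumption:

```python
# Record the unpinned dependency versions (PyTorch, geoopt; GraphZoo analogous).
import torch
import geoopt

print("torch :", torch.__version__)
print("geoopt:", getattr(geoopt, "__version__", "unknown"))  # guarded: attribute assumed
print("cuda  :", torch.version.cuda, "| available:", torch.cuda.is_available())
```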
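The Experiment Setup row fixes the optimizer (Riemannian Adam, lr 0.01, β = 0.9/0.999) and the search space. A minimal sketch of that setup using geoopt's RiemannianAdam; the two-layer placeholder model and the empty loop body are assumptions for illustration, not the H-GRAM architecture or training loop:

```python
# Sketch of the reported optimizer setup and hyper-parameter grid, assuming a
# placeholder torch model in place of H-GRAM.
import itertools
import torch
from geoopt.optim import RiemannianAdam

model = torch.nn.Sequential(
    torch.nn.Linear(32, 64), torch.nn.ReLU(), torch.nn.Linear(64, 7)
)
# Reported settings: initial learning rate 0.01, betas (0.9, 0.999).
optimizer = RiemannianAdam(model.parameters(), lr=1e-2, betas=(0.9, 0.999))

# Reported search space: tasks per batch, learning rates (HNN and meta updates),
# and hidden dimensions; selection is by best accuracy on D_val.
search_space = itertools.product(
    [4, 8, 16, 32, 64],        # tasks per batch
    [1e-2, 5e-3, 1e-3, 5e-4],  # learning rates
    [64, 128, 256],            # hidden dimensions
)
for tasks_per_batch, lr, hidden_dim in search_space:
    pass  # train and evaluate on D_val under these settings (omitted)
```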