Hyperbolic Graph Convolutional Neural Networks

Authors: Ines Chami, Zhitao Ying, Christopher Ré, Jure Leskovec

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
---|---|---
Research Type | Experimental | Experiments demonstrate that HGCN learns embeddings that preserve hierarchical structure, and leads to improved performance when compared to Euclidean analogs, even with very low dimensional embeddings: compared to state-of-the-art GCNs, HGCN achieves an error reduction of up to 63.1% in ROC AUC for link prediction and of up to 47.5% in F1 score for node classification, also improving state-of-the-art on the Pubmed dataset. (The error-reduction arithmetic is sketched after the table.)
Researcher Affiliation | Academia | Department of Computer Science, Stanford University; Institute for Computational and Mathematical Engineering, Stanford University; {chami, rexying, chrismre, jure}@cs.stanford.edu
Pseudocode | No | The paper describes algorithms and operations but does not include structured pseudocode or algorithm blocks.
Open Source Code | Yes | Project website with code and data: http://snap.stanford.edu/hgcn
Open Datasets | Yes | CORA [36] and PUBMED [27] are standard benchmarks describing citation networks... Disease propagation tree. We simulate the SIR disease spreading model [2]... Protein-protein interactions (PPI) networks. PPI is a dataset of human PPI networks [37]... AIRPORT is a transductive dataset where nodes represent airports and edges represent the airline routes as from OpenFlights.org. (A toy SIR propagation sketch follows the table.)
Dataset Splits | Yes | In transductive LP tasks, we randomly split edges into 85/5/10% for training, validation and test sets. For transductive NC, we use 70/15/15% splits for AIRPORT, 30/10/60% splits for DISEASE, and we use standard splits [21, 46] with 20 train examples per class for CORA and PUBMED. (An edge-split sketch follows the table.)
Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments (e.g., CPU/GPU models, memory).
Software Dependencies | No | The paper mentions optimizers like Adam and Riemannian SGD but does not specify software dependencies with version numbers (e.g., Python, PyTorch, specific library versions).
Experiment Setup | Yes | For all methods, we perform a hyper-parameter search on a validation set over initial learning rate, weight decay, dropout, number of layers, and activation functions... We optimize all models with Adam [19], except Poincaré embeddings which are optimized with Riemannian SGD [4, 48]. (A grid-search sketch follows the table.)
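
The "error reduction" quoted in the Research Type row is relative: error is taken as 1 − ROC AUC (or 1 − F1), and the reduction is measured against the baseline's error. A minimal sketch of that arithmetic, using made-up scores (not the paper's numbers):

```python
# Hypothetical scores for illustration only -- not taken from the paper.
baseline_auc = 0.950                 # assumed Euclidean GCN ROC AUC
hgcn_auc = 0.982                     # assumed HGCN ROC AUC

baseline_err = 1.0 - baseline_auc    # 0.050
hgcn_err = 1.0 - hgcn_auc            # 0.018

# Relative error reduction, as reported in the abstract.
reduction = (baseline_err - hgcn_err) / baseline_err
print(f"relative error reduction: {reduction:.1%}")  # ~64.0%
```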
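The DISEASE dataset is generated by simulating SIR spread, but the report does not reproduce the simulation parameters. Below is a toy sketch of how such a propagation tree could be grown; the `sir_tree` helper, branching factor, and infection probability are all assumptions, not the paper's procedure:

```python
import random

def sir_tree(n_nodes, infect_prob=0.3, branching=3, seed=0):
    """Toy SIR-style tree: one infected root; each infected node tries
    `branching` transmissions, each succeeding with `infect_prob`.
    Sketch only -- parameter values are assumptions, not the paper's."""
    rng = random.Random(seed)
    edges, frontier, next_id = [], [0], 1
    while frontier and next_id < n_nodes:
        node = frontier.pop(0)
        for _ in range(branching):
            if next_id >= n_nodes:
                break
            if rng.random() < infect_prob:
                edges.append((node, next_id))  # node infects a new case
                frontier.append(next_id)
                next_id += 1
    return edges  # edge list of the propagation tree

print(sir_tree(10))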
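For the Dataset Splits row, the 85/5/10% edge split for transductive link prediction is straightforward to reproduce. A minimal sketch (`split_edges` is a hypothetical helper; real LP evaluation would additionally sample negative, i.e. non-edge, pairs, which is omitted here):

```python
import numpy as np

def split_edges(edges, val_frac=0.05, test_frac=0.10, seed=0):
    """Random 85/5/10 train/val/test split over an edge list."""
    rng = np.random.default_rng(seed)
    edges = np.asarray(edges)                 # shape (num_edges, 2)
    perm = rng.permutation(len(edges))        # shuffled edge indices
    n_val = int(val_frac * len(edges))
    n_test = int(test_frac * len(edges))
    val = edges[perm[:n_val]]
    test = edges[perm[n_val:n_val + n_test]]
    train = edges[perm[n_val + n_test:]]
    return train, val, test
```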
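The Experiment Setup row names the tuned quantities (initial learning rate, weight decay, dropout, number of layers, activation) but not the value ranges, so the grid below is an assumption, as are the `build_model` and `train_and_eval` callables. A minimal grid-search sketch; the Poincaré embedding baselines would instead use a Riemannian optimizer (e.g. geoopt's `RiemannianSGD`), not shown:

```python
import itertools
import torch

# Assumed search ranges -- the paper lists the tuned quantities,
# not these specific values.
GRID = {
    "lr": [1e-2, 1e-3],
    "weight_decay": [0.0, 5e-4],
    "dropout": [0.0, 0.5],
    "num_layers": [2, 3],
    "act": ["relu", "tanh"],
}

def grid_search(build_model, train_and_eval):
    """Exhaustive search over GRID. `build_model(cfg)` and
    `train_and_eval(model, opt, cfg)` (returns a validation score)
    are hypothetical user-supplied callables."""
    best_score, best_cfg = float("-inf"), None
    for values in itertools.product(*GRID.values()):
        cfg = dict(zip(GRID, values))
        model = build_model(cfg)
        opt = torch.optim.Adam(model.parameters(),
                               lr=cfg["lr"],
                               weight_decay=cfg["weight_decay"])
        score = train_and_eval(model, opt, cfg)
        if score > best_score:
            best_score, best_cfg = score, cfg
    return best_cfg, best_score
```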