Hyperbolic Representation Learning: Revisiting and Advancing

Authors: Menglin Yang, Min Zhou, Rex Ying, Yankai Chen, Irwin King

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments across various models and tasks demonstrate the versatility and adaptability of the proposed method. Remarkably, our method achieves an improvement of up to 21.4% over the competing baselines.
Researcher Affiliation | Collaboration | 1) Department of Computer Science and Engineering, The Chinese University of Hong Kong; 2) Huawei Technologies Co., Ltd.; 3) Yale University.
Pseudocode | Yes | Algorithm 1: Hyperbolic Informed Embedding (HIE).
Open Source Code | No | The paper neither explicitly states that its source code is released nor provides a link to a repository.
Open Datasets | Yes | We perform evaluations on four publicly available datasets, namely DISEASE, AIRPORT, CORA, and CITESEER. For more details about these datasets, please refer to Appendix F.1 and Table 7 (dataset statistics).
Dataset Splits | Yes | For the link prediction task, the edges in the DISEASE dataset are randomly split into training (75%), validation (5%), and test (20%) sets for the shallow models. For node classification, the nodes in the AIRPORT dataset are split 70% / 15% / 15%, and the nodes in the DISEASE dataset 30% / 10% / 60%.
Hardware Specification | No | The paper only states 'on the NVIDIA GPUs' without specifying the GPU model or other hardware details.
Software Dependencies | No | The paper mentions 'PyTorch' and 'Adam' but does not provide version numbers for these software components.
Experiment Setup | Yes | For all models, we sweep the embedding dimension over {8, 64, 256} and then perform a hyper-parameter search on a validation set over learning rate {0.01, 0.02, 0.005}, weight decay {1e-4, 5e-4, 5e-5}, dropout {0.1, 0.2, 0.5, 0.6}, and number of layers {1, 2, 3, 4, 5}. We also adopt early stopping based on the validation set, with patience in {100, 200, 500}.
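The random splits described in the Dataset Splits row (e.g., the 30% / 10% / 60% node split on DISEASE) can be sketched as a simple shuffle-and-cut; the paper's actual splitting code is not shown, so this is only an illustrative reconstruction:

```python
import random

def random_split(items, fracs=(0.30, 0.10, 0.60), seed=0):
    """Shuffle items and cut them into train/val/test by the given fractions."""
    assert abs(sum(fracs) - 1.0) < 1e-9
    items = list(items)
    random.Random(seed).shuffle(items)  # fixed seed for reproducibility
    n = len(items)
    n_train = int(fracs[0] * n)
    n_val = int(fracs[1] * n)
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]
    return train, val, test

# e.g. a DISEASE-style node-classification split over 1000 hypothetical node ids
train, val, test = random_split(range(1000))
```

The same helper covers the link-prediction case by passing edge tuples and `fracs=(0.75, 0.05, 0.20)`.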
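The hyper-parameter search described in the Experiment Setup row can be enumerated as a plain Cartesian grid; this is a minimal sketch of the search space only, and the `train_and_validate` step it would feed is not part of the paper's released material:

```python
import itertools

# Search space as reported in the paper's experiment setup.
search_space = {
    "dim": [8, 64, 256],
    "lr": [0.01, 0.02, 0.005],
    "weight_decay": [1e-4, 5e-4, 5e-5],
    "dropout": [0.1, 0.2, 0.5, 0.6],
    "num_layers": [1, 2, 3, 4, 5],
    "patience": [100, 200, 500],  # early-stopping patience on the validation set
}

def grid(space):
    """Yield every configuration in the Cartesian product of the search space."""
    keys = list(space)
    for values in itertools.product(*(space[k] for k in keys)):
        yield dict(zip(keys, values))

configs = list(grid(search_space))
# 3 * 3 * 3 * 4 * 5 * 3 = 1620 configurations per model/dataset
```

In practice each configuration would be trained once and the best one selected by validation score, which is what "hyper-parameter search on a validation set" implies.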