Random Laplacian Features for Learning with Hyperbolic Space

Authors: Tao Yu, Christopher De Sa

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In Section 6, we evaluate our approach empirically. Our HyLa-networks demonstrate better performance, scalability and computation speed than existing hyperbolic networks: HyLa-networks consistently outperform HGCN, even on a tree dataset, with 12.3% improvement while being 4.4x faster.
Researcher Affiliation | Academia | Anonymous authors. Paper under double-blind review.
Pseudocode | Yes | Algorithm 1: End-to-End HyLa (a hypothetical pipeline sketch follows the table).
Open Source Code | No | The paper does not provide a direct link or explicit statement about the availability of its own source code. It mentions a 'publicly released version' for baselines such as HGCN, but not for HyLa.
Open Datasets | Yes | We use transductive datasets: Cora, Citeseer and Pubmed (Sen et al., 2008), which are standard citation network benchmarks, following the standard splits adopted in Kipf & Welling (2016).
Dataset Splits | Yes | We follow the standard splits of Kipf & Welling (2016), with 20 nodes per class for training, 500 nodes for validation and 1000 nodes for testing (a loading sketch follows the table).
Hardware Specification | Yes | We measure the training time on an NVIDIA GeForce RTX 2080 Ti GPU and show the specific timing statistics in the Appendix.
Software Dependencies | No | The paper mentions a Riemannian SGD optimizer (Bonnabel, 2013) and the Adam optimizer (Kingma & Ba, 2014), but does not specify software versions for any libraries or frameworks used in the implementation (an optimizer sketch follows the table).
Experiment Setup | Yes | We provide the detailed values of hyper-parameters for node classification and text classification in Table 5 and Table 6, respectively.
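
The Pseudocode row above refers to Algorithm 1 (End-to-End HyLa). The paper's feature map is built from eigenfunctions of the Laplacian on hyperbolic space; the sketch below is a hypothetical PyTorch rendering of that idea as a random-feature pipeline, not a reproduction of Algorithm 1. The sampling distributions, feature scaling and the downstream linear head are assumptions.

```python
# Hypothetical sketch of a HyLa-style random-feature pipeline on the Poincare
# ball: evaluate randomized eigenfunctions of the hyperbolic Laplacian at the
# embeddings, then feed the features to an ordinary linear classifier.
import math
import torch

def busemann(z, w):
    """Horocycle coordinate <z, w> = log((1 - |z|^2) / |z - w|^2).

    z: (N, d) points inside the Poincare ball; w: (D, d) unit vectors on the
    ideal boundary. Returns an (N, D) matrix of coordinates.
    """
    z_norm2 = (z * z).sum(-1, keepdim=True)                    # (N, 1)
    dist2 = ((z.unsqueeze(1) - w.unsqueeze(0)) ** 2).sum(-1)   # (N, D)
    return torch.log((1.0 - z_norm2) / dist2.clamp_min(1e-15))

def laplacian_features(z, w, lam, theta):
    """Real random features e^{rho <z,w>} cos(lam <z,w> + theta), rho = (d-1)/2.

    Each feature is the real part of a phase-shifted eigenfunction
    e^{(i lam + rho) <z, w>} of the hyperbolic Laplacian.
    """
    rho = (z.shape[-1] - 1) / 2.0
    u = busemann(z, w)                                         # (N, D)
    return torch.exp(rho * u) * torch.cos(lam * u + theta) / math.sqrt(w.shape[0])

# Toy usage with random inputs (shapes and distributions are illustrative only).
torch.manual_seed(0)
N, d, D, C = 128, 2, 100, 3
z = 0.5 * torch.rand(N, 1) * torch.nn.functional.normalize(torch.randn(N, d), dim=-1)
w = torch.nn.functional.normalize(torch.randn(D, d), dim=-1)   # boundary directions
lam = torch.randn(D)                                           # eigenvalue parameters
theta = 2 * math.pi * torch.rand(D)                            # random phases
features = laplacian_features(z, w, lam, theta)                # (N, D)
logits = torch.nn.Linear(D, C)(features)                       # downstream linear head
```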
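The Open Datasets and Dataset Splits rows cite the standard splits of Kipf & Welling (2016): 20 labeled nodes per class for training, 500 nodes for validation and 1000 for testing. Below is a minimal loading sketch assuming PyTorch Geometric's Planetoid loader, whose "public" split is exactly this standard split; the paper does not state which loader it uses.

```python
# Minimal sketch: load Cora, Citeseer and Pubmed with the standard "public"
# Planetoid splits (20 labeled nodes per class / 500 validation / 1000 test).
# PyTorch Geometric is an assumption; the paper does not name its data loader.
from torch_geometric.datasets import Planetoid

for name in ["Cora", "CiteSeer", "PubMed"]:
    dataset = Planetoid(root="data", name=name, split="public")
    data = dataset[0]                     # one graph with boolean node masks
    print(name,
          int(data.train_mask.sum()),     # 20 * number of classes
          int(data.val_mask.sum()),       # 500
          int(data.test_mask.sum()))      # 1000
```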
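The Software Dependencies row notes that the paper names a Riemannian SGD optimizer (Bonnabel, 2013) for manifold parameters and Adam (Kingma & Ba, 2014) for Euclidean ones, without library versions. The sketch below shows one common way to set up such a two-optimizer scheme with geoopt; the library choice, learning rates and tensor shapes are assumptions, not details from the paper.

```python
# Hypothetical two-optimizer setup: Riemannian SGD for embeddings living on the
# Poincare ball, plain Adam for the Euclidean classifier head. geoopt and all
# hyper-parameters here are assumptions, not values from the paper.
import torch
import geoopt

ball = geoopt.PoincareBall()
init = 1e-2 * torch.randn(2708, 16)        # small norms stay well inside the ball
embeddings = geoopt.ManifoldParameter(init, manifold=ball)
head = torch.nn.Linear(100, 7)             # Euclidean weights (e.g. Cora's 7 classes)

riemannian_opt = geoopt.optim.RiemannianSGD([embeddings], lr=0.1)
euclidean_opt = torch.optim.Adam(head.parameters(), lr=1e-3)

# In a training step, both optimizers are stepped after a shared backward pass:
#   loss.backward(); riemannian_opt.step(); euclidean_opt.step()
```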