Numerically Accurate Hyperbolic Embeddings Using Tiling-Based Models

Authors: Tao Yu, Christopher M. De Sa

Venue: NeurIPS 2019

Reproducibility assessment (variable, result, and supporting LLM response):
Research Type: Experimental. "We evaluate our tiling-based model empirically, and show that it can both compress hyperbolic embeddings (down to 2% of a Poincaré embedding on WordNet Nouns) and learn more accurate embeddings on real-world datasets. In Section 7, we evaluate our methods on two different tasks: (1) compressing a learned embedding and (2) learning embeddings on multiple real-world datasets."
Researcher Affiliation: Academia. Tao Yu, Department of Computer Science, Cornell University, Ithaca, NY, USA (tyu@cs.cornell.edu); Christopher De Sa, Department of Computer Science, Cornell University, Ithaca, NY, USA (cdesa@cs.cornell.edu).
Pseudocode: Yes. Algorithm 1: Map Lorentz model to L-tiling model. Algorithm 2: RSGD in the L-tiling model.
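For orientation, the sketch below shows plain Riemannian SGD on the Lorentz (hyperboloid) model in PyTorch: rescale the Euclidean gradient by the inverse Minkowski metric, project onto the tangent space, and move along the exponential map. This is a minimal sketch, not the paper's Algorithm 2, which runs the update inside the L-tiling model and re-normalizes points into a fundamental domain of the tiling; the function names and the small clamp constant are assumptions made here.

```python
import torch

def minkowski_dot(u, v):
    # Minkowski inner product <u, v>_L = -u_0 v_0 + sum_{i>0} u_i v_i
    return -u[..., :1] * v[..., :1] + (u[..., 1:] * v[..., 1:]).sum(dim=-1, keepdim=True)

def rsgd_step(x, egrad, lr=0.1):
    # Riemannian gradient: rescale by the inverse metric diag(-1, 1, ..., 1),
    # then project onto the tangent space at x: proj_x(u) = u + <x, u>_L x.
    rgrad = torch.cat([-egrad[..., :1], egrad[..., 1:]], dim=-1)
    rgrad = rgrad + minkowski_dot(x, rgrad) * x
    # Exponential map: exp_x(v) = cosh(|v|_L) x + sinh(|v|_L) v / |v|_L.
    v = -lr * rgrad
    vnorm = minkowski_dot(v, v).clamp(min=1e-15).sqrt()  # clamp avoids 0/0 at v = 0
    return torch.cosh(vnorm) * x + torch.sinh(vnorm) * v / vnorm
```

In the plain Lorentz model this update accumulates floating-point error as points move far from the origin; bounding that error is exactly the motivation for the paper's tiling-based representation.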
Open Source Code: Yes. "We release our compression code in Julia and learning code in PyTorch publicly for reproducibility."
https://github.com/ydtydr/Hyperbolic_Tiling_Compression
https://github.com/ydtydr/Hyperbolic_Tiling_Learning
Open Datasets: Yes. "We evaluate our tiling-based model empirically, and show that it can both compress hyperbolic embeddings (down to 2% of a Poincaré embedding on WordNet Nouns) and learn more accurate embeddings on real-world datasets." Datasets used:

    Dataset          Nodes    Edges
    Bio-yeast [29]    1458     1948
    WordNet [14]     74374    75834
    Nouns            82115   769130
    Verbs            13542    35079
    Mammals           1181     6541
    Gr-QC [23]        4158    13422
Dataset Splits: No. The paper mentions sampling negative examples during training and using standard metrics, but it does not specify explicit training/validation/test splits, percentages, or validation-set usage.
Hardware Specification: No. The paper states "All models were trained in float64 for 1000 epochs," but provides no details on the hardware used (e.g., GPU models, CPU types, or cloud resources).
Software Dependencies: No. The paper mentions using Julia and PyTorch for the released code, but it does not provide version numbers for these or any other software dependencies.
Experiment Setup: Yes. "We randomly sample |N(u)| = 50 negative examples per positive example during training. All models were trained in float64 for 1000 epochs with the same hyper-parameters."
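As a concrete reading of that setup, the following is a hypothetical PyTorch sketch assuming the standard softmax-over-negatives embedding objective (as in Nickel and Kiela's Poincaré embeddings); the released Hyperbolic_Tiling_Learning code defines the actual loss and sampler, which may differ.

```python
import torch
import torch.nn.functional as F

torch.set_default_dtype(torch.float64)  # "All models were trained in float64"

EPOCHS = 1000       # "... for 1000 epochs with the same hyper-parameters"
NUM_NEGATIVES = 50  # |N(u)| = 50 negative examples per positive example

def embedding_loss(dist_pos, dist_neg):
    # dist_pos: (batch,) hyperbolic distances of positive pairs
    # dist_neg: (batch, NUM_NEGATIVES) distances to sampled negatives
    # Hypothetical softmax-over-negatives objective; the positive pair
    # occupies index 0 of each row of logits.
    logits = torch.cat([-dist_pos.unsqueeze(1), -dist_neg], dim=1)
    target = torch.zeros(dist_pos.size(0), dtype=torch.long)
    return F.cross_entropy(logits, target)
```

The distances would come from whichever model is being trained (Poincaré, Lorentz, or L-tiling); setting the default dtype to float64 matches the stated training precision.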