Learning Continuous Hierarchies in the Lorentz Model of Hyperbolic Geometry
Authors: Maximilian Nickel, Douwe Kiela
ICML 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In Section 4 we evaluate the efficiency of our approach on large taxonomies. Furthermore, we evaluate the ability of our model to discover meaningful hierarchies on real-world datasets. |
| Researcher Affiliation | Industry | 1Facebook AI Research, New York, NY, USA. |
| Pseudocode | Yes | Algorithm 1 Riemannian Stochastic Gradient Descent |
| Open Source Code | No | For Poincaré embeddings, we use the official open-source implementation.4 ... 4Source code available at https://github.com/facebookresearch/poincare-embeddings. This link is for the baseline Poincaré embeddings, not for the Lorentz model described in this paper. |
| Open Datasets | Yes | WordNet® (Miller & Fellbaum, 1998) is a large lexical database... EuroVoc is a multilingual thesaurus maintained by the European Union... 2Available at http://eurovoc.europa.eu... Enron email corpus (Priebe et al., 2006)... This dataset has been created by Priebe et al. (2006) from the full Enron email corpus... lexical cognate data provided by Bouckaert et al. (2012). |
| Dataset Splits | No | Both methods were cross-validated over identical sets of hyperparameters. This implies cross-validation was used for hyperparameter tuning, but it does not specify explicit train/validation/test dataset splits (e.g., percentages, sample counts, or predefined partitions) for model training and evaluation. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments (e.g., GPU/CPU models, memory specifications). |
| Software Dependencies | No | The paper mentions using Python and the official implementation of Poincaré embeddings, but it does not provide specific version numbers for any software libraries or dependencies. |
| Experiment Setup | Yes | We initialize the embeddings close to the origin of Hn by sampling from the uniform distribution U(−0.001, 0.001) and by setting x0 according to Equation (6). Input: Learning rate η, number of epochs T. |
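The setup row above quotes the paper's initialization but does not reproduce Equation (6) or Algorithm 1 themselves. The sketch below assumes Equation (6) is the standard hyperboloid constraint x0 = sqrt(1 + ||x_{1:n}||²) (so that ⟨x, x⟩_L = −1), and `exp_map` sketches the retraction step a Riemannian SGD update would use on the Lorentz model; all function names are illustrative, not from the paper's code.

```python
import numpy as np

def lorentz_inner(u, v):
    """Lorentzian inner product <u, v>_L = -u0*v0 + sum_{i>=1} ui*vi."""
    return -u[..., 0] * v[..., 0] + np.sum(u[..., 1:] * v[..., 1:], axis=-1)

def init_embeddings(num_points, dim, eps=1e-3, seed=0):
    """Initialize points near the origin of the hyperboloid H^dim.

    Spatial coordinates are drawn from U(-eps, eps), as quoted in the
    Experiment Setup row; x0 is then fixed so that <x, x>_L = -1, which
    we assume is what Equation (6) prescribes:
        x0 = sqrt(1 + ||x_{1:dim}||^2)
    """
    rng = np.random.default_rng(seed)
    x = np.empty((num_points, dim + 1))
    x[:, 1:] = rng.uniform(-eps, eps, size=(num_points, dim))
    x[:, 0] = np.sqrt(1.0 + np.sum(x[:, 1:] ** 2, axis=-1))
    return x

def exp_map(x, v):
    """Exponential map at x for a tangent vector v (one retraction step).

    exp_x(v) = cosh(||v||_L) * x + sinh(||v||_L) * v / ||v||_L,
    where ||v||_L = sqrt(<v, v>_L) is real for tangent vectors.
    """
    n = np.sqrt(np.maximum(lorentz_inner(v, v), 0.0))
    if n < 1e-12:  # zero-length step: stay at x
        return x
    return np.cosh(n) * x + np.sinh(n) * v / n
```

A Euclidean gradient `u` can be moved into the tangent space at `x` via the projection `v = u + lorentz_inner(x, u) * x` before calling `exp_map(x, eta * v)`, which is the shape of the update in Algorithm 1 (Riemannian Stochastic Gradient Descent).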