Tree! I am no Tree! I am a low dimensional Hyperbolic Embedding
Authors: Rishi Sonthalia, Anna Gilbert
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show empirically that TREEREP is not only many orders of magnitude faster than previously known algorithms, but also produces metrics with lower average distortion and higher mean average precision than most previous algorithms for learning hyperbolic embeddings, extracting hierarchical information, and approximating metrics via tree metrics. |
| Researcher Affiliation | Academia | Rishi Sonthalia Department of Mathematics University of Michigan Ann Arbor, MI, 48104 rsonthal@umich.edu Anna C. Gilbert Department of Statistics and Data Science Yale University New Haven, CT, 06510 anna.gilbert@yale.edu |
| Pseudocode | Yes | Algorithm 1 and 2 present a high level version of the pseudo-code. The complete pseudo-code for TREEREP is presented in Appendix 13. |
| Open Source Code | Yes | All code can be found at the following link https://github.com/rsonthal/TreeRep |
| Open Datasets | Yes | First, we create synthetic data sets by sampling random points from H^k. Second, we will take real world biological data sets that are believed to have hierarchical structure. Third, we consider metrics that come from real world unweighted graphs. ... immunological distances from Sarich [37]. ... the Zeisel and CBMC scRNA-seq data sets [44, 40]. ... eight well known graph data sets from [34]. |
| Dataset Splits | No | The paper does not explicitly mention training, validation, or test dataset splits. The evaluation is focused on comparing algorithm performance on entire datasets. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments (e.g., CPU/GPU models, memory specifications). |
| Software Dependencies | No | The paper does not provide specific version numbers for any ancillary software components or libraries used in the experiments. |
| Experiment Setup | No | The paper does not provide specific experimental setup details such as hyperparameters, optimizer settings, or other system-level training configurations in the main text. It mentions that "Additional details about the experiments and algorithms can be found in Appendix 12," but these are not in the main paper. |
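The comparison metrics quoted above (average distortion, mean average precision) are standard for evaluating tree-metric approximations. As a minimal sketch, average distortion is commonly defined as the mean over all pairs of the relative error |d_T(i,j) − d(i,j)| / d(i,j); the paper's exact normalization may differ, so treat this as an illustrative assumption rather than the authors' implementation:

```python
import itertools

def average_distortion(D, D_tree):
    """Mean relative distortion |d_T(i,j) - d(i,j)| / d(i,j)
    over all unordered pairs i < j (assumed definition)."""
    n = len(D)
    errs = [abs(D_tree[i][j] - D[i][j]) / D[i][j]
            for i, j in itertools.combinations(range(n), 2)]
    return sum(errs) / len(errs)

# Toy example: shortest-path metric of a 4-cycle vs. the metric of
# one of its spanning trees (the path 0-1-2-3). Only the pair (0, 3)
# is distorted: tree distance 3 vs. graph distance 1.
D = [[0, 1, 2, 1],
     [1, 0, 1, 2],
     [2, 1, 0, 1],
     [1, 2, 1, 0]]
D_tree = [[0, 1, 2, 3],
          [1, 0, 1, 2],
          [2, 1, 0, 1],
          [3, 2, 1, 0]]

print(average_distortion(D, D_tree))  # → 0.3333... (= 2/6)
```

A lower value means the tree metric more faithfully preserves the original distances; the paper reports that TreeRep attains lower average distortion than most baseline embedding algorithms.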