Hyperbolic Embeddings of Supervised Models
Authors: Richard Nock, Ehsan Amid, Frank Nielsen, Alexander Soen, Manfred Warmuth
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments are provided on readily available domains, and all proofs, additional results and additional experiments are given in an Appendix. |
| Researcher Affiliation | Collaboration | Richard Nock, Google Research, richardnock@google.com; Ehsan Amid, Google DeepMind, eamid@google.com; Frank Nielsen, Sony CS Labs, Inc., frank.nielsen@acm.org; Alexander Soen, RIKEN AIP / Australian National University, alexander.soen@anu.edu.au; Manfred K. Warmuth, Google Research, manfred@google.com |
| Pseudocode | Yes | Algorithm 1 GETMDT(ν, b, ν1, I) |
| Open Source Code | Yes | Code is provided with an example public domain, a resource file, and a README.txt for quick testing and validation. |
| Open Datasets | Yes | The domains we consider are all public domains, from the UCI repository of ML datasets [12], OpenML, or Kaggle, see Table I. |
| Dataset Splits | Yes | All test errors are estimated from a 10-fold stratified cross-validation. |
| Hardware Specification | No | The code runs on any standard computer. |
| Software Dependencies | No | No specific ancillary software details (e.g., library or solver names with version numbers like Python 3.8, CPLEX 12.4) needed to replicate the experiment are provided. |
| Experiment Setup | Yes | In the top-down induction scheme for DT, the leaf chosen to be split is the heaviest non-pure leaf, i.e. the one, among the leaves containing both classes, with the largest proportion of training examples. Induction is stopped when the tree reaches a fixed size, or when all leaves are pure, or when no further split allows decreasing the expected log-loss. We do not prune DTs. |
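The evaluation protocol above estimates all test errors with a 10-fold stratified cross-validation. As a minimal sketch of what "stratified" means here (the fold-assignment scheme below is illustrative, not the authors' code), one can assign fold ids per class so that each fold preserves the class proportions:

```python
import numpy as np

def stratified_folds(y, k=10, seed=0):
    """Assign each example a fold id in 0..k-1, shuffling within each
    class so that every fold keeps (roughly) the class proportions."""
    rng = np.random.default_rng(seed)
    folds = np.empty(len(y), dtype=int)
    for c in np.unique(y):
        idx = np.flatnonzero(y == c)   # indices of class c
        rng.shuffle(idx)
        folds[idx] = np.arange(len(idx)) % k  # round-robin into folds
    return folds

# Test errors are then averaged over the k held-out folds:
# for f in range(k): train on folds != f, test on folds == f.
```

With a balanced binary problem of 100 examples, each of the 10 folds receives exactly 5 examples of each class.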
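The induction scheme in the last row (split the heaviest non-pure leaf, stop at a fixed size / all-pure leaves / no log-loss-decreasing split, no pruning) can be sketched as follows. This is an illustrative binary-classification reimplementation under assumed axis-aligned threshold splits, not the paper's released code:

```python
import numpy as np

def leaf_log_loss(y):
    """Empirical log-loss of a leaf predicting its class proportion."""
    p = y.mean()
    if p in (0.0, 1.0):
        return 0.0
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def best_split(X, y):
    """Axis-aligned split minimizing the children's weighted log-loss.
    Returns None when no split decreases the expected log-loss."""
    n = len(y)
    base, best = leaf_log_loss(y), None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j])[:-1]:   # thresholds between values
            mask = X[:, j] <= t
            nl = mask.sum()
            if nl == 0 or nl == n:
                continue
            loss = (nl * leaf_log_loss(y[mask])
                    + (n - nl) * leaf_log_loss(y[~mask])) / n
            if loss < base and (best is None or loss < best[0]):
                best = (loss, j, t, mask)
    return best

def grow_tree(X, y, max_leaves=8):
    """Top-down induction: always split the heaviest non-pure leaf.
    Stops at max_leaves, when all leaves are pure, or when the chosen
    leaf admits no log-loss-decreasing split. No pruning."""
    leaves = [(X, y)]   # each leaf holds the examples routed to it
    splits = []         # (feature, threshold) record, for illustration
    while len(leaves) < max_leaves:
        impure = [(i, len(ys)) for i, (_, ys) in enumerate(leaves)
                  if 0.0 < ys.mean() < 1.0]
        if not impure:
            break                       # all leaves are pure
        i = max(impure, key=lambda p: p[1])[0]   # heaviest non-pure leaf
        Xi, yi = leaves.pop(i)
        s = best_split(Xi, yi)
        if s is None:
            leaves.append((Xi, yi))
            break                       # no split decreases the log-loss
        _, j, t, mask = s
        splits.append((j, t))
        leaves.append((Xi[mask], yi[mask]))
        leaves.append((Xi[~mask], yi[~mask]))
    return splits, leaves
```

On a one-feature toy set where the classes separate at a threshold, a single split suffices and induction halts with two pure leaves.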