Hyperbolic Disk Embeddings for Directed Acyclic Graphs
Authors: Ryota Suzuki, Ryusuke Takahama, Shun Onoda
ICML 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The results of our experiments show that our Disk Embedding models outperform existing methods especially in complex DAGs other than trees. Experimentally, we demonstrate that our methods outperform all of the baseline methods, especially for DAGs other than trees. |
| Researcher Affiliation | Industry | 1LAPRAS Inc., Tokyo, Japan. Correspondence to: Ryota Suzuki <suzuki@lapras.com>. |
| Pseudocode | No | The paper includes mathematical formulations for optimization (e.g., update formulae for RSGD) but does not present them in a clearly labeled 'Algorithm' or 'Pseudocode' block. A hedged sketch of what such an RSGD update might look like is given after this table. |
| Open Source Code | Yes | For reproducibility, our source code for the experiments is publicly available online: https://github.com/lapras-inc/disk-embedding |
| Open Datasets | Yes | For evaluation we use WordNet (Miller, 1995), a large lexical database that provides hypernymy relations: https://wordnet.princeton.edu/ |
| Dataset Splits | No | The paper mentions using WordNet for evaluation and conducting learning by sampling positive and negative pairs (an illustrative sampling sketch follows this table). It also mentions 'validation data' in relation to baseline methods. However, it does not specify explicit train/validation/test splits by percentage, sample count, or reference to predefined splits for their own model's setup. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU models, or memory specifications used for running the experiments. |
| Software Dependencies | No | The paper states 'For these baseline methods, we used the implementation reported by Ganea et al.' and 'experimental conditions such as hyperparameters are designed to be nearly similar to those of the experiments conducted by Ganea et al.' It does not specify versions for its own implementation's software dependencies (e.g., Python, PyTorch/TensorFlow versions). |
| Experiment Setup | No | The paper mentions 'experimental conditions such as hyperparameters are designed to be nearly similar to those of the experiments conducted by Ganea et al.' but does not explicitly state the specific hyperparameter values (e.g., learning rate, batch size, number of epochs) or detailed training configurations used for their own experiments. |
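
The 'Pseudocode' row notes that the paper gives update formulae for Riemannian SGD (RSGD) without a labeled algorithm block. The snippet below is a minimal sketch of a generic RSGD step on the Poincaré ball, the kind of update commonly used for hyperbolic embeddings; the function name, learning rate, and projection margin are illustrative assumptions, and this is not the authors' exact Disk Embedding update.

```python
import numpy as np

EPS = 1e-5  # illustrative margin to keep points strictly inside the unit ball


def rsgd_step(x, euclidean_grad, lr=0.01):
    """One generic RSGD step on the Poincare ball (illustrative sketch only).

    The Euclidean gradient is rescaled by the inverse metric factor
    (1 - ||x||^2)^2 / 4 to obtain the Riemannian gradient, and the result is
    projected back into the open unit ball. This mirrors the style of update
    formulae the paper refers to, not its exact Disk Embedding objective.
    """
    sq_norm = np.dot(x, x)
    riemannian_grad = ((1.0 - sq_norm) ** 2 / 4.0) * euclidean_grad
    x_new = x - lr * riemannian_grad
    norm = np.linalg.norm(x_new)
    if norm >= 1.0:  # retract points that overshoot the ball boundary
        x_new = x_new / norm * (1.0 - EPS)
    return x_new


# Example usage with a made-up gradient:
x = np.array([0.1, -0.2])
grad = np.array([0.5, 0.3])
x = rsgd_step(x, grad, lr=0.1)
```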
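
The 'Dataset Splits' row mentions learning by sampling positive and negative pairs from WordNet hypernymy relations without a stated split protocol. Below is a minimal sketch of one common sampling scheme, assuming a list of directed (hypernym, hyponym) edges; the function name, the number of negatives per positive, and the corruption rule are assumptions, not the paper's documented protocol.

```python
import random


def sample_pairs(edges, nodes, num_negatives=10, rng=random):
    """Yield (u, v, label) triples: each observed edge plus corrupted negatives.

    `edges` is an iterable of positive (u, v) pairs (e.g. WordNet hypernymy
    edges) and `nodes` is the full entity collection. Negatives replace v with
    a random node not linked to u -- a common scheme for this kind of
    experiment, not necessarily the authors' exact setup.
    """
    edge_set = set(edges)
    nodes = list(nodes)
    for u, v in edges:
        yield u, v, 1
        produced = 0
        while produced < num_negatives:
            v_neg = rng.choice(nodes)
            if v_neg != u and (u, v_neg) not in edge_set:
                yield u, v_neg, 0
                produced += 1


# Example usage on a toy hypernymy graph:
toy_edges = [("animal", "dog"), ("animal", "cat"), ("dog", "poodle")]
toy_nodes = {"animal", "dog", "cat", "poodle"}
pairs = list(sample_pairs(toy_edges, toy_nodes, num_negatives=2))
```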