Directed Graph Embeddings in Pseudo-Riemannian Manifolds

Authors: Aaron Sim, Maciej L Wiatrak, Angus Brayne, Paidi Creed, Saee Paliwal

ICML 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this paper, we show that general directed graphs can be effectively represented by an embedding model that combines three components... We demonstrate the representational capabilities of this method by applying it to the task of link prediction on a series of synthetic and real directed graphs from natural language applications and biology. In this section we evaluate the quality of pseudo-Riemannian embeddings via a series of graph reconstruction and link prediction experiments.
Researcher Affiliation | Industry | BenevolentAI, London, United Kingdom. Correspondence to: Aaron Sim <aaron.sim@benevolent.ai>.
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper states that 'The models in this paper were implemented in JAX (Bradbury et al., 2018)' but provides no link to, or explicit statement of, open-source code for its methodology.
Open Datasets | Yes | Duplication Divergence Model: A two-parameter model that simulates the growth and evolution of large protein-protein interaction networks (Ispolatov et al., 2005). DREAM5: Gold standard edges from a genome-scale network inference challenge, comprising a set of gene regulatory networks across organisms and an in silico example (Marbach et al., 2012). WordNet: An acyclic, hierarchical, tree-like network of nouns... We use the same train / validation / test split as in Suzuki et al. (2019) and Ganea et al. (2018). A generic sketch of the duplication-divergence procedure appears after the table.
Dataset Splits | Yes | We use the same train / validation / test split as in Suzuki et al. (2019) and Ganea et al. (2018).
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts) used for running its experiments.
Software Dependencies | No | The paper states that 'The models in this paper were implemented in JAX (Bradbury et al., 2018)' but does not give version numbers for JAX or any other software dependency.
Experiment Setup | Yes | We fix a negative sampling ratio of 4 throughout. Similar to Nickel & Kiela (2017), we initialize our embeddings in a small random patch near the origin (x = (1, 0, ..., 0) for AdS) and perform a burn-in phase of several epochs with the learning rate scaled by a factor of 0.01. A JAX sketch of this setup follows the table.
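
The duplication-divergence model quoted in the Open Datasets row is a generative procedure, so a short sketch may clarify how the synthetic graphs are grown. This is a minimal, generic Python variant under assumed rules, not the exact model of Ispolatov et al. (2005); the function name duplication_divergence and the parameters p_keep and p_link are illustrative.

```python
import random

def duplication_divergence(n_nodes, p_keep=0.4, p_link=0.1, seed=0):
    """Grow a directed graph by node duplication and edge divergence.

    Generic two-parameter sketch (NOT the exact rules of Ispolatov
    et al., 2005): each step duplicates a random existing node, keeps
    each inherited edge with probability p_keep, and links the
    duplicate to its parent with probability p_link.
    """
    rng = random.Random(seed)
    edges = {(0, 1)}  # seed graph: a single directed edge
    for new in range(2, n_nodes):
        parent = rng.randrange(new)  # pick an existing node to duplicate
        for (u, v) in list(edges):   # copy the parent's edges, with divergence
            if u == parent and rng.random() < p_keep:
                edges.add((new, v))
            if v == parent and rng.random() < p_keep:
                edges.add((u, new))
        if rng.random() < p_link:    # optionally link duplicate to parent
            edges.add((new, parent))
    return edges
```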
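
The Experiment Setup row pins down three concrete choices: a negative-sampling ratio of 4, initialization in a small random patch near the AdS origin x = (1, 0, ..., 0), and a burn-in phase run with the learning rate scaled by 0.01. Below is a minimal JAX sketch of those choices; it is not the authors' code, and the embedding dimension, base learning rate, burn-in length, and patch size eps are assumptions.

```python
import jax
import jax.numpy as jnp

DIM = 10                 # ambient embedding dimension (assumed)
NEG_RATIO = 4            # negative sampling ratio, fixed at 4 in the paper
BASE_LR = 0.1            # illustrative base learning rate (assumed)
BURN_IN_EPOCHS = 10      # "several epochs" of burn-in (count assumed)
BURN_IN_SCALE = 0.01     # burn-in learning-rate factor (from the paper)

def init_embeddings(key, n_nodes, eps=1e-3):
    """Place all points in a small random patch near the AdS origin
    x = (1, 0, ..., 0), as described in the experiment setup."""
    origin = jnp.zeros(DIM).at[0].set(1.0)
    noise = eps * jax.random.normal(key, (n_nodes, DIM))
    return origin + noise

def sample_negatives(key, n_pos, n_nodes):
    """Draw NEG_RATIO random (head, tail) pairs per positive edge."""
    return jax.random.randint(key, (n_pos * NEG_RATIO, 2), 0, n_nodes)

def lr_schedule(epoch):
    """Scale the learning rate by 0.01 during the burn-in phase."""
    return BASE_LR * jnp.where(epoch < BURN_IN_EPOCHS, BURN_IN_SCALE, 1.0)

key = jax.random.PRNGKey(0)
emb = init_embeddings(key, n_nodes=100)  # all rows close to (1, 0, ..., 0)
```

A full implementation would also constrain points to the anti-de Sitter quadric (e.g., by a retraction after each gradient step); that step is omitted from this sketch.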