Capacity and Bias of Learned Geometric Embeddings for Directed Graphs
Authors: Michael Boratko, Dongxu Zhang, Nicholas Monath, Luke Vilnis, Kenneth L. Clarkson, Andrew McCallum
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform rigorous empirical evaluations of vector, hyperbolic, and region-based geometric representations on several families of synthetic and real-world directed graphs. |
| Researcher Affiliation | Collaboration | University of Massachusetts Amherst; IBM Research |
| Pseudocode | No | The paper describes methods and models using mathematical equations and textual explanations but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code and data are available at https://github.com/iesl/geometric_graph_embedding. |
| Open Datasets | Yes | Real-world graphs. We also select the following real-world graph datasets: WordNet (Animals) [34]... Hierarchical Clustering. We run agglomerative clustering on the Inception V3 [56] features from 213 ImageNet images [47]. |
| Dataset Splits | No | We check the training loss ten times per epoch, and apply early-stopping with a patience of just over 2 epochs (21 loss observations). The paper applies early stopping to the training loss, but it does not specify explicit training/validation/test splits (e.g., percentages or sample counts) for the datasets used (a sketch of the early-stopping scheme follows the table). |
| Hardware Specification | No | The paper mentions 'high performance computing equipment' in the acknowledgements but does not provide specific hardware details such as exact GPU/CPU models or memory amounts used for experiments. |
| Software Dependencies | No | The paper mentions using W&B [5] for hyperparameter optimization but does not provide specific version numbers for software dependencies or libraries used in the implementation. |
| Experiment Setup | Yes | All models are tuned on learning rate, batch size, and weight of negative loss. We tune the margin γ for OE, the β parameters in (9) for HYPERBOLIC, the intersection and volume temperatures for BOX, and the initialization of these temperatures for T-BOX. We check the training loss ten times per epoch, and apply early-stopping with a patience of just over 2 epochs (21 loss observations). An illustrative sweep configuration also follows the table. |
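The early-stopping scheme quoted in the Dataset Splits and Experiment Setup rows (training loss checked ten times per epoch, patience of 21 loss observations) can be sketched as follows. This is a minimal sketch assuming a generic training loop; the function names (`train_step`, `eval_loss`) and overall structure are illustrative, not taken from the authors' repository.

```python
# Hypothetical sketch of the early-stopping scheme described above:
# training loss is checked ten times per epoch, and training stops
# once the loss has not improved for 21 consecutive checks
# (a patience of just over 2 epochs). All names are illustrative.

CHECKS_PER_EPOCH = 10
PATIENCE = 21  # loss observations without improvement before stopping


def train_with_early_stopping(model, batches_per_epoch, max_epochs,
                              train_step, eval_loss):
    """train_step and eval_loss are assumed callables: one optimizer
    step, and a scalar training-loss evaluation, respectively."""
    check_interval = max(1, batches_per_epoch // CHECKS_PER_EPOCH)
    best_loss = float("inf")
    checks_since_best = 0

    for epoch in range(max_epochs):
        for batch_idx in range(batches_per_epoch):
            train_step(model, batch_idx)
            # Observe the training loss ten times per epoch.
            if (batch_idx + 1) % check_interval == 0:
                loss = eval_loss(model)
                if loss < best_loss:
                    best_loss = loss
                    checks_since_best = 0
                else:
                    checks_since_best += 1
                # Stop after 21 observations without improvement.
                if checks_since_best >= PATIENCE:
                    return best_loss
    return best_loss
```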
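The paper tunes learning rate, batch size, and the negative-loss weight (plus model-specific parameters) using W&B [5]. Below is a hedged sketch of how such a sweep might be declared with the W&B API; the search method, parameter names, and ranges are assumptions for illustration, since the paper does not report its exact search space.

```python
# Illustrative W&B sweep configuration for the tuning described above.
# Parameter names and ranges are assumptions, not the paper's values.
import wandb

sweep_config = {
    "method": "random",
    "metric": {"name": "train_loss", "goal": "minimize"},
    "parameters": {
        "learning_rate": {
            "distribution": "log_uniform_values", "min": 1e-4, "max": 1e-1,
        },
        "batch_size": {"values": [128, 256, 512]},
        "negative_weight": {"min": 0.1, "max": 10.0},    # weight of negative loss
        "intersection_temp": {"min": 0.001, "max": 1.0},  # BOX models
        "volume_temp": {"min": 0.01, "max": 10.0},        # BOX models
    },
}

# Register the sweep; agents would then run the training function above.
sweep_id = wandb.sweep(sweep_config, project="geometric_graph_embedding")
```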