Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Knowledge Hypergraph Embedding Meets Relational Algebra
Authors: Bahare Fatemi, Perouz Taslakian, David Vazquez, David Poole
JMLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We verify experimentally that ReAlE outperforms state-of-the-art models in knowledge hypergraph completion, and in representing each of these primitive relational algebra operations. For the latter experiment, we generate a synthetic knowledge hypergraph, for which we design an algorithm based on the Erdős-Rényi model for generating random graphs. |
| Researcher Affiliation | Collaboration | Bahare Fatemi EMAIL University of British Columbia Vancouver, BC V6T 1Z4, Canada Perouz Taslakian EMAIL ServiceNow Research Montreal, QC H2S 3G9, Canada David Vazquez EMAIL ServiceNow Research Montreal, QC H2S 3G9, Canada David Poole EMAIL University of British Columbia Vancouver, BC V6T 1Z4, Canada |
| Pseudocode | Yes | Algorithm 1: generate_knowledge_hypergraph(V, R) Algorithm 2: generate_ground_truth(V, R, n_derived_tuples) Algorithm 3: synthesize_dataset(V, R, n_derived_tuples) Algorithm 4: Learning ReAlE |
| Open Source Code | Yes | Code and data are available at https://github.com/baharefatemi/ReAlE. |
| Open Datasets | Yes | We use three real-world datasets for our experiments: JF17K (Wen et al., 2016), and FB-auto and m-FB15K (Fatemi et al., 2020). |
| Dataset Splits | Yes | Given a knowledge hypergraph defined on τ, we let τ_train, τ_test, and τ_valid denote the (pairwise disjoint) train, test, and validation sets, respectively, so that τ = τ_train ⊔ τ_test ⊔ τ_valid, where ⊔ denotes disjoint set union. ... Finally, the complete algorithm to generate the train, valid, and test sets of the synthetic dataset is described in Algorithm 3 below. ... train, valid, test = randomly split relational_data into train, valid and test |
| Hardware Specification | Yes | For all the experiments we use a single 12GB GPU (NVIDIA Tesla P100 PCIe 12 GB). |
| Software Dependencies | No | The paper mentions PyTorch but does not specify a version number. Other mentioned tools (Adagrad, dropout) are techniques rather than specific software packages with versions in this context. |
| Experiment Setup | Yes | We fix the maximum number of epochs to 1000 and embedding size to 200. We tune lr (learning rate) and w (window size) using the sets {0.05, 0.08, 0.1, 0.2} and {1, 2, 4, 5, 8} (first five divisors of 200). We tune σ (nonlinear function) using the set {tanh, sigmoid, exponent} for the JF17K dataset. ... For the experiment on REL-ER and also in Section C.1, we fixed the negative ratio and batch size of all baselines and our model to 10 and 128 respectively. |
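The Dataset Splits row quotes a step of the paper's Algorithm 3, "randomly split relational_data into train, valid and test", yielding pairwise disjoint sets whose disjoint union recovers τ. A minimal sketch of such a split is shown below; the function name, the 80/10/10 ratios, and the seeding are illustrative assumptions, not the paper's actual Algorithm 3.

```python
import random

def random_split(tuples, ratios=(0.8, 0.1, 0.1), seed=0):
    """Randomly partition relational tuples into pairwise disjoint
    train/valid/test sets (a generic sketch; ratios are assumed)."""
    assert abs(sum(ratios) - 1.0) < 1e-9
    shuffled = list(tuples)
    random.Random(seed).shuffle(shuffled)  # deterministic for a fixed seed
    n_train = int(ratios[0] * len(shuffled))
    n_valid = int(ratios[1] * len(shuffled))
    train = shuffled[:n_train]
    valid = shuffled[n_train:n_train + n_valid]
    test = shuffled[n_train + n_valid:]
    return train, valid, test
```

Because the three slices partition the shuffled list, the split sets are disjoint by construction and their union is the original tuple set, matching the τ = τ_train ⊔ τ_test ⊔ τ_valid property quoted above.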