InGram: Inductive Knowledge Graph Embedding via Relation Graphs

Authors: Jaejun Lee, Chanyoung Chung, Joyce Jiyoung Whang

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results show that INGRAM outperforms 14 different state-of-the-art methods on varied inductive learning scenarios.
Researcher Affiliation | Academia | School of Computing, KAIST, Daejeon, South Korea. Correspondence to: Joyce Jiyoung Whang <jjwhang@kaist.ac.kr>.
Pseudocode | Yes | Algorithm 1: Embeddings via INGRAM at Inference Time. (A schematic sketch follows the table.)
Open Source Code | Yes | https://github.com/bdi-lab/InGram
Open Datasets | Yes | We create 12 datasets using three benchmarks, NELL-995 (Xiong et al., 2017), Wikidata68K (Gesese et al., 2022), and FB15K237 (Toutanova & Chen, 2015).
Dataset Splits | Yes | E_inf is divided into three pairwise disjoint sets, such that E_inf := F_inf ∪ T_val ∪ T_test with a ratio of 3:1:1. ... We divide E_tr into F_tr and T_tr with a ratio of 3:1. (A split sketch follows the table.)
Hardware Specification | Yes | All experiments were conducted with a GeForce RTX 2080 Ti, a GeForce RTX 3090, or an RTX A6000, depending on the implementation of each method.
Software Dependencies | No | The paper mentions using 'the official C++ implementation of node2vec' but does not provide specific version numbers for this or any other software component used in the experiments.
Experiment Setup | Yes | We set d = 32 and d̂ = 32 for INGRAM and all the baseline methods. ... We tuned INGRAM with 10 negative samples, d ∈ {32, 64, 128, 256}, d̂ ∈ {128, 256}, L ∈ {1, 2, 3}, L̂ ∈ {2, 3, 4}, K ∈ {8, 16}, K̂ ∈ {8, 16}, γ ∈ {1.0, 1.5, 2.0, 2.5}, B ∈ {1, 5, 10}, and the learning rate ∈ {0.0005, 0.001}. (A grid-enumeration sketch follows the table.)
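
The Pseudocode row points to Algorithm 1, which produces embeddings for unseen entities and relations at inference time. Below is a schematic Python sketch of that two-stage flow (build a relation graph, propagate relation features over it, then propagate entity features over the original graph). This is not the paper's method: InGram uses attention-based aggregation with learned transformations, while this sketch substitutes simple weighted means, and every function and variable name is hypothetical.

from collections import defaultdict
import numpy as np

def build_relation_graph(triples):
    # Link two relations whenever they share an entity; the weight counts
    # shared entities. (InGram instead uses normalized affinity weights.)
    rels_of = defaultdict(set)
    for h, r, t in triples:
        rels_of[h].add(r)
        rels_of[t].add(r)
    weight = defaultdict(float)
    for rels in rels_of.values():
        for r1 in rels:
            for r2 in rels:
                if r1 != r2:
                    weight[(r1, r2)] += 1.0
    return weight

def embed_at_inference(triples, d=32, L=2, L_hat=2, seed=0):
    # Unseen entities and relations receive freshly initialized features.
    rng = np.random.default_rng(seed)
    ents = sorted({x for h, _, t in triples for x in (h, t)})
    rels = sorted({r for _, r, _ in triples})
    z = {r: rng.standard_normal(d) for r in rels}  # relation features
    x = {e: rng.standard_normal(d) for e in ents}  # entity features

    # Stage 1: L_hat propagation rounds over the relation graph.
    nbrs = defaultdict(list)
    for (r1, r2), w in build_relation_graph(triples).items():
        nbrs[r1].append((r2, w))
    for _ in range(L_hat):
        z = {r: 0.5 * (z[r] + sum(w * z[r2] for r2, w in nbrs[r])
                       / sum(w for _, w in nbrs[r]))
             if nbrs[r] else z[r]
             for r in rels}

    # Stage 2: L relation-conditioned rounds over the original graph
    # (edges treated as undirected for simplicity).
    inc = defaultdict(list)
    for h, r, t in triples:
        inc[h].append((t, r))
        inc[t].append((h, r))
    for _ in range(L):
        x = {e: 0.5 * (x[e] + sum(x[u] * z[r] for u, r in inc[e]) / len(inc[e]))
             if inc[e] else x[e]
             for e in ents}
    return x, z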
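
The Dataset Splits row quotes 3:1:1 and 3:1 ratios for partitioning the triple sets into pairwise disjoint parts. Here is a minimal shuffle-and-cut sketch reproducing those ratios; the paper may impose additional constraints (for example, on which entities or relations appear in each part), and all names here are hypothetical.

import random

def ratio_split(triples, ratios, seed=0):
    # Shuffle, then cut into pairwise disjoint parts proportional to `ratios`.
    triples = list(triples)
    random.Random(seed).shuffle(triples)
    total, parts, start = sum(ratios), [], 0
    for i, w in enumerate(ratios):
        end = len(triples) if i == len(ratios) - 1 else start + len(triples) * w // total
        parts.append(triples[start:end])
        start = end
    return parts

# Toy usage mirroring the quoted protocol:
e_inf = [(f"h{i}", f"r{i % 4}", f"t{i}") for i in range(100)]
f_inf, t_val, t_test = ratio_split(e_inf, (3, 1, 1))  # E_inf -> F_inf, T_val, T_test
assert len(f_inf) + len(t_val) + len(t_test) == len(e_inf)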
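
The Experiment Setup row lists the tuned hyperparameter grid. The sketch below writes out the quoted values and enumerates the grid; the comment glosses (d / d̂ as entity- and relation-level dimensions, L / L̂ as layer counts, K / K̂ as head counts, γ as the margin) are my assumptions, since the quoted text does not define the symbols.

from itertools import product

SEARCH_SPACE = {
    "num_negatives": [10],          # fixed at 10 negative samples
    "d":     [32, 64, 128, 256],    # entity-level dimension (assumed)
    "d_hat": [128, 256],            # relation-level dimension (assumed)
    "L":     [1, 2, 3],             # entity-level layers (assumed)
    "L_hat": [2, 3, 4],             # relation-level layers (assumed)
    "K":     [8, 16],               # entity-level heads (assumed)
    "K_hat": [8, 16],               # relation-level heads (assumed)
    "gamma": [1.0, 1.5, 2.0, 2.5],  # margin in the loss (assumed)
    "B":     [1, 5, 10],            # role not stated in the quoted text
    "lr":    [0.0005, 0.001],       # learning rate
}

def iter_configs(space):
    keys = list(space)
    for values in product(*(space[k] for k in keys)):
        yield dict(zip(keys, values))

print(sum(1 for _ in iter_configs(SEARCH_SPACE)))  # size of the full grid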