Inductive Relation Prediction Using Analogy Subgraph Embeddings

Authors: Jiarui Jin, Yangkun Wang, Kounianhua Du, Weinan Zhang, Zheng Zhang, David Wipf, Yong Yu, Quan Gan

ICLR 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our model consistently outperforms existing models in terms of heterogeneous graph based recommendation as well as knowledge graph completion. We also empirically demonstrate the capability of our model in generalizing to new relation types while producing explainable heat maps of attention scores across the discovered logics.
Researcher Affiliation | Collaboration | Jiarui Jin1, Yangkun Wang1, Kounianhua Du1, Weinan Zhang1, Quan Gan2, Zheng Zhang2, Yong Yu1, David Wipf2 — 1Shanghai Jiao Tong University, 2Amazon. {jinjiarui97,espylacopa,774581965,wnzhang,yyu}@sjtu.edu.cn, {quagan,zhaz,daviwipf}@amazon.com
Pseudocode | Yes | Algorithm 1: Graph ANGEL
Open Source Code | No | No explicit statement or link providing access to open-source code for the methodology was found.
Open Datasets | Yes | We evaluate our model on four heterogeneous graph benchmark datasets in various fields: LastFM (Hu et al., 2018a), Yelp (Hu et al., 2018b), Amazon (Ni et al., 2019), and Douban Book (Zheng et al., 2017). [...] We compare different methods on two benchmark datasets: FB15k-237 (Toutanova & Chen, 2015) and WN18RR (Dettmers et al., 2018).
Dataset Splits | Yes | We split each dataset into 60%, 20%, and 20% for training, validation, and test sets, respectively.
Hardware Specification | Yes | The models are trained under the same hardware settings on an Amazon EC2 p3.8xlarge instance, where the GPU is an NVIDIA Tesla V100 and the CPU is an Intel Xeon E5-2686 v4 (Broadwell). (Footnote: detailed settings of AWS EC2 instance types can be found at https://aws.amazon.com/ec2/instance-types/?nc1=h_ls.)
Software Dependencies | No | No software dependencies with version numbers are stated; the paper mentions general frameworks such as Deep Graph Library (DGL) and PyTorch, but without versions.
Experiment Setup | Yes | The dimension size of the embeddings is set to 128, and the learning rate space is {0.001, 0.0001}. The dropout of input is set to 0.6, of edge is set to 0.3. [...] The number of target subgraphs K is set as 32 and the number of analogy subgraphs Q (for both supporting and refuting patterns) is set as 64.
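The reported 60/20/20 dataset split is straightforward to reproduce. A minimal sketch with a hypothetical helper (the authors' own splitting code is not released, so the function name and fixed seed are illustrative assumptions):

```python
import random

def split_indices(n, train_frac=0.6, val_frac=0.2, seed=0):
    """Shuffle indices 0..n-1 and cut them into train/val/test
    portions (60/20/20 by default). Hypothetical helper, not the
    authors' code; the seed is fixed only for reproducibility."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train_idx, val_idx, test_idx = split_indices(100)
print(len(train_idx), len(val_idx), len(test_idx))  # 60 20 20
```

Any remainder from rounding falls into the test portion, so the three parts always cover every index exactly once.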
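The hyperparameters quoted above (embedding dimension 128, learning-rate space {0.001, 0.0001}, input/edge dropout 0.6/0.3, K = 32, Q = 64) can be collected into a small grid for re-runs. A sketch assuming hypothetical key names, since the authors' configuration format is not published:

```python
from itertools import product

# Search space reconstructed from the paper's reported settings;
# the dictionary keys are illustrative, not the authors' names.
search_space = {
    "embed_dim": [128],
    "lr": [0.001, 0.0001],     # the only dimension with more than one value
    "input_dropout": [0.6],
    "edge_dropout": [0.3],
    "K_target_subgraphs": [32],
    "Q_analogy_subgraphs": [64],
}

# Cartesian product over the lists yields one config per combination.
configs = [dict(zip(search_space, values))
           for values in product(*search_space.values())]
print(len(configs))  # 2: one per learning rate
```

Only the learning rate varies, so the grid contains exactly two configurations to train and compare on the validation split.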