Meta-Learning Based Knowledge Extrapolation for Knowledge Graphs in the Federated Setting

Authors: Mingyang Chen, Wen Zhang, Zhen Yao, Xiangnan Chen, Mengxiao Ding, Fei Huang, Huajun Chen

IJCAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate our proposed method MaKEr (for Meta-Learning Based Knowledge Extrapolation) on datasets derived from KG benchmarks and compare it with baselines to show the effectiveness of this model. We report the link prediction results in Table 1, and show the detailed results for different kinds of query triples (i.e., u_ent, u_rel, and u_both) respectively. We conduct several ablation studies to show the importance of different parts of our proposed model. We visualize the entity embeddings for NELL-Ext produced by our proposed MaKEr and Asmp-KGE in Fig. 3. (See the evaluation sketch after this table.)
Researcher Affiliation | Collaboration | Mingyang Chen1, Wen Zhang2, Zhen Yao2, Xiangnan Chen2, Mengxiao Ding3, Fei Huang4 and Huajun Chen1,5,6. 1College of Computer Science and Technology, Zhejiang University; 2School of Software Technology, Zhejiang University; 3Huawei Technologies Co., Ltd.; 4Alibaba Group; 5ZJU-Hangzhou Global Scientific and Technological Innovation Center; 6Alibaba-Zhejiang University Joint Institute of Frontier Technologies. Emails: {mingyangchen, zhang.wen, yz0204, xnchen2020, huajunsir}@zju.edu.cn, dingmengxiao@huawei.com, f.huang@alibaba-inc.com
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | Source code is available at https://github.com/zjukg/MaKEr.
Open Datasets | Yes | In order to evaluate the ability of a model for knowledge extrapolation in the federated setting, we create two datasets from two standard KG benchmarks, FB15k-237 [Toutanova et al., 2015] and NELL-995 [Xiong et al., 2017], named FB-Ext and NELL-Ext.
Dataset Splits | No | The paper defines a training KG (G_tr) and a test KG (G_te), with support and query triples within tasks. However, it does not specify a distinct 'validation' split (e.g., percentages or counts) as part of an overall train/validation/test partitioning of the dataset.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory specifications) used for running the experiments. It only states that the model is implemented in PyTorch and DGL.
Software Dependencies | No | Our model is implemented in PyTorch and DGL. The paper names the software used but does not provide specific version numbers for PyTorch or DGL.
Experiment Setup | Yes | For MaKEr, the dimensions for embeddings and feature representations are 32; we employ the GNN with 2 layers, and the dimension of the GNN's hidden representation is 32. The batch size for meta-training is 64, and we use the Adam optimizer with a learning rate of 0.001. Before meta-training our model, we sample 10,000 tasks on the training KG for each dataset, and the details of task sampling can be found in Appendix E. During training, we randomly treat entities and relations as unseen with a ratio of 30% to 80% for each task.
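
To make the Experiment Setup row concrete, here is a minimal sketch that collects the reported hyperparameters into a configuration object and attaches the Adam optimizer; the class name MaKErConfig and the placeholder PyTorch module standing in for the actual DGL-based GNN are illustrative assumptions, not taken from the released code.

```python
import torch
from dataclasses import dataclass

# Hyperparameters transcribed from the Experiment Setup row; the config class
# and the placeholder encoder below are illustrative stand-ins, not MaKEr code.
@dataclass
class MaKErConfig:
    emb_dim: int = 32                 # embedding / feature representation dimension
    gnn_layers: int = 2               # number of GNN layers
    gnn_hidden_dim: int = 32          # GNN hidden representation size
    batch_size: int = 64              # meta-training batch size
    lr: float = 1e-3                  # Adam learning rate
    num_train_tasks: int = 10_000     # tasks sampled from the training KG
    unseen_ratio: tuple = (0.3, 0.8)  # fraction treated as unseen per task

cfg = MaKErConfig()

# Placeholder encoder only to show how the optimizer would be attached;
# the real model is a GNN implemented with PyTorch and DGL.
encoder = torch.nn.Sequential(
    torch.nn.Linear(cfg.emb_dim, cfg.gnn_hidden_dim),
    torch.nn.ReLU(),
    torch.nn.Linear(cfg.gnn_hidden_dim, cfg.emb_dim),
)
optimizer = torch.optim.Adam(encoder.parameters(), lr=cfg.lr)
```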
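The last sentence of the Experiment Setup row (randomly treating 30% to 80% of entities and relations as unseen per task) could be realized roughly as below; the function name mask_unseen, the uniform sampling of the ratio, and applying the same ratio to entities and relations are all assumptions rather than details from the paper.

```python
import random

# Hedged sketch of masking a random 30%-80% of entities/relations as "unseen"
# for one meta-training task; all names and sampling choices are assumptions.
def mask_unseen(entities, relations, low=0.3, high=0.8, rng=None):
    rng = rng or random.Random()
    ratio = rng.uniform(low, high)  # one masking ratio drawn per task
    unseen_ents = set(rng.sample(sorted(entities), int(ratio * len(entities))))
    unseen_rels = set(rng.sample(sorted(relations), int(ratio * len(relations))))
    return unseen_ents, unseen_rels

# Example: mask a toy vocabulary of 10 entities and 5 relations.
ents = {f"e{i}" for i in range(10)}
rels = {f"r{i}" for i in range(5)}
unseen_e, unseen_r = mask_unseen(ents, rels, rng=random.Random(0))
```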
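For the link-prediction evaluation referenced in the Research Type row (results broken down by u_ent, u_rel, and u_both query triples), the sketch below computes MRR and Hits@k per query type; the tensor layout, the query-type labels as plain strings, and the unfiltered ranking are assumptions, not details of the MaKEr implementation.

```python
import torch

# Hypothetical helper computing link-prediction metrics (MRR, Hits@k),
# reported separately for u_ent, u_rel, and u_both query triples.
def ranking_metrics(scores, true_idx, hits_at=(1, 5, 10)):
    """scores: [num_queries, num_candidates]; true_idx: [num_queries]."""
    # Rank of the true candidate = 1 + number of candidates scored strictly higher.
    true_scores = scores.gather(1, true_idx.unsqueeze(1))    # [Q, 1]
    ranks = (scores > true_scores).sum(dim=1).float() + 1.0  # [Q]
    metrics = {"MRR": (1.0 / ranks).mean().item()}
    for k in hits_at:
        metrics[f"Hits@{k}"] = (ranks <= k).float().mean().item()
    return metrics

def evaluate_by_query_type(scores, true_idx, query_types):
    """query_types: one of 'u_ent', 'u_rel', 'u_both' per query."""
    results = {}
    for qtype in ("u_ent", "u_rel", "u_both"):
        mask = torch.tensor([t == qtype for t in query_types])
        if mask.any():
            results[qtype] = ranking_metrics(scores[mask], true_idx[mask])
    results["overall"] = ranking_metrics(scores, true_idx)
    return results

# Tiny synthetic usage example (3 queries over 5 candidates).
if __name__ == "__main__":
    scores = torch.randn(3, 5)
    true_idx = torch.tensor([0, 2, 4])
    print(evaluate_by_query_type(scores, true_idx, ["u_ent", "u_rel", "u_both"]))
```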