Entity-Agnostic Representation Learning for Parameter-Efficient Knowledge Graph Embedding

Authors: Mingyang Chen, Wen Zhang, Zhen Yao, Yushan Zhu, Yang Gao, Jeff Z. Pan, Huajun Chen

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results show that EARL uses fewer parameters and performs better on link prediction tasks than baselines, reflecting its parameter efficiency.
Researcher Affiliation | Collaboration | 1 College of Computer Science and Technology, Zhejiang University; 2 School of Software Technology, Zhejiang University; 3 Donghai Laboratory; 4 Huawei Technologies Co., Ltd.; 5 School of Informatics, The University of Edinburgh; 6 Alibaba-Zhejiang University Joint Institute of Frontier Technologies
Pseudocode | No | The paper describes methods verbally and through equations but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | The source code is available at https://github.com/zjukg/EARL.
Open Datasets | Yes | Our model is evaluated on several KG benchmarks with various sizes and characteristics, and the dataset statistics are shown in Table 1. Specifically, FB15k-237 (Toutanova et al. 2015) is derived from Freebase (Bollacker et al. 2008)... WN18RR (Dettmers et al. 2018) is a subset of WordNet (Miller 1995)... CoDEx (Safavi and Koutra 2020)... YAGO3-10 (Mahdisoltani, Biega, and Suchanek 2015)...
Dataset Splits | Yes | Table 1: Dataset statistics. The number of entities, relations, training triples, validation triples, and test triples. (A minimal loading sketch for these splits is given below, after the table.)
Hardware Specification | Yes | We conduct our experiments on NVIDIA RTX 3090 GPUs with 24GB RAM.
Software Dependencies | No | The paper mentions using PyTorch and DGL, but does not provide specific version numbers for these software dependencies. It only cites their original papers.
Experiment Setup | Yes | For entity-agnostic encoding, we use 2-layer GNNs, and the default number of k for kNResEnt encoding is 10. We set the number of reserved entities as 10% of the number of all entities for each dataset... For model training, the learning rate is set to 0.001; the batch size is set to 1024; the number of negative samples (i.e., n) is set to 256; the margin is set to 15 for YAGO3-10 and 10 for other datasets. (A configuration sketch follows the table.)
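
To make the quoted setup concrete, the hyperparameters in the Experiment Setup row can be collected into a single configuration object. The following is a minimal Python sketch; the names (EARLConfig, margin_for) are hypothetical and are not taken from the official EARL repository.

# Hypothetical configuration mirroring the hyperparameters quoted in the Experiment Setup row.
from dataclasses import dataclass

@dataclass
class EARLConfig:
    gnn_layers: int = 2                   # "we use 2-layer GNNs"
    k_nres_ent: int = 10                  # default k for kNResEnt encoding
    reserved_entity_ratio: float = 0.10   # reserved entities = 10% of all entities
    learning_rate: float = 1e-3
    batch_size: int = 1024
    num_negatives: int = 256              # n, negative samples per positive triple
    margin: float = 10.0                  # 15 for YAGO3-10, 10 for the other datasets

def margin_for(dataset: str) -> float:
    # Per-dataset margin as reported in the paper.
    return 15.0 if dataset == "YAGO3-10" else 10.0

config = EARLConfig(margin=margin_for("FB15k-237"))
print(config)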
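
All four benchmarks (FB15k-237, WN18RR, CoDEx, YAGO3-10) come with predefined train/validation/test triple files, whose sizes are what Table 1 reports. The loader below is a minimal sketch assuming the common one-triple-per-line, tab-separated "head relation tail" layout of these benchmark distributions; the directory layout and file names (train.txt, valid.txt, test.txt) are assumptions, not details stated in this report.

# Minimal loader for KG benchmark splits; file names and layout are assumed conventions.
from pathlib import Path

def load_triples(path: Path) -> list[tuple[str, str, str]]:
    # Each line: head <tab> relation <tab> tail
    triples = []
    with path.open(encoding="utf-8") as f:
        for line in f:
            head, relation, tail = line.rstrip("\n").split("\t")
            triples.append((head, relation, tail))
    return triples

def load_splits(root: str) -> dict[str, list[tuple[str, str, str]]]:
    root_dir = Path(root)
    return {split: load_triples(root_dir / f"{split}.txt")
            for split in ("train", "valid", "test")}

# Example usage (hypothetical path):
# splits = load_splits("data/FB15k-237")
# print({name: len(ts) for name, ts in splits.items()})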