End-to-End Entity Linking with Hierarchical Reinforcement Learning

Authors: Lihan Chen, Tinghui Zhu, Jingping Liu, Jiaqing Liang, Yanghua Xiao

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We conduct extensive experiments to show that the proposed method achieves state-of-the-art performance in several EL benchmark datasets."
Researcher Affiliation | Collaboration | Lihan Chen (1), Tinghui Zhu (1), Jingping Liu (2), Jiaqing Liang (3), Yanghua Xiao (1,4)*. (1) Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University; (2) East China University of Science and Technology, Shanghai, China; (3) School of Data Science, Fudan University, China; (4) Fudan-Aishu Cognitive Intelligence Joint Research Center
Pseudocode | Yes | "Algorithm 1: Hierarchical Policy Optimization for EL" (a hedged sketch of such a loop follows this table)
Open Source Code | Yes | "Our code is publicly available at https://github.com/lhlclhl/he2eel."
Open Datasets | Yes | "We use the standard English AIDA-CoNLL splits (Hoffart et al. 2011) for training, validation, and in-domain test."
Dataset Splits | Yes | "We use the standard English AIDA-CoNLL splits (Hoffart et al. 2011) for training, validation, and in-domain test." (the conventional document ranges are sketched after this table)
Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments, such as GPU or CPU models.
Software Dependencies | No | The paper mentions software components such as Longformer, LSTM, and BART, but does not pin their versions or list other dependencies with specific version numbers.
Experiment Setup | No | "The detailed settings including dataset statistics, training details and hyper-parameters settings are presented in supplementary materials."
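
The page marks Pseudocode as Yes but does not reproduce Algorithm 1 itself. As orientation only, below is a minimal sketch of what a hierarchical policy optimization loop for entity linking can look like, assuming a two-level REINFORCE setup: a high-level policy samples a mention span, a low-level policy samples a candidate entity for that span, and both are updated with a shared reward. Every name here (HighLevelPolicy, LowLevelPolicy, hierarchical_policy_step, reward_fn) is a hypothetical placeholder, not the authors' implementation; consult the linked repository for the real Algorithm 1.

```python
# Hypothetical sketch of a two-level (hierarchical) REINFORCE step for EL.
# This is NOT the paper's Algorithm 1; it only illustrates the general shape
# of hierarchical policy optimization: a high-level policy picks a mention,
# a low-level policy links it to an entity, and both share one reward.
import torch
import torch.nn as nn


class HighLevelPolicy(nn.Module):
    """Scores candidate mention spans from a document representation."""

    def __init__(self, hidden_dim: int, num_spans: int):
        super().__init__()
        self.scorer = nn.Linear(hidden_dim, num_spans)

    def forward(self, doc_repr: torch.Tensor) -> torch.distributions.Categorical:
        return torch.distributions.Categorical(logits=self.scorer(doc_repr))


class LowLevelPolicy(nn.Module):
    """Scores candidate entities for one selected mention."""

    def __init__(self, hidden_dim: int, num_candidates: int):
        super().__init__()
        self.scorer = nn.Linear(hidden_dim, num_candidates)

    def forward(self, mention_repr: torch.Tensor) -> torch.distributions.Categorical:
        return torch.distributions.Categorical(logits=self.scorer(mention_repr))


def hierarchical_policy_step(high, low, doc_repr, mention_reprs, reward_fn, optimizer):
    """One REINFORCE update: sample a mention, then an entity; reinforce both."""
    high_dist = high(doc_repr)
    mention = high_dist.sample()                 # high-level action: which span
    low_dist = low(mention_reprs[mention])
    entity = low_dist.sample()                   # low-level action: which entity
    reward = reward_fn(mention.item(), entity.item())
    # A shared terminal reward reinforces both levels of the hierarchy.
    loss = -(high_dist.log_prob(mention) + low_dist.log_prob(entity)) * reward
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward


# Toy usage with random features and a dummy reward (all values illustrative).
high = HighLevelPolicy(hidden_dim=64, num_spans=10)
low = LowLevelPolicy(hidden_dim=64, num_candidates=30)
opt = torch.optim.Adam(list(high.parameters()) + list(low.parameters()), lr=1e-3)
doc_repr = torch.randn(64)
mention_reprs = torch.randn(10, 64)
hierarchical_policy_step(high, low, doc_repr, mention_reprs,
                         lambda m, e: 1.0 if e == 0 else -0.1, opt)
```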
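
For the Open Datasets and Dataset Splits rows, "standard splits" conventionally refers to the document partition of Hoffart et al. (2011). The constant below records the commonly reported document ranges as an assumption to verify against the original release; the name AIDA_CONLL_SPLITS is mine, not from the paper or the repository.

```python
# Commonly reported AIDA-CoNLL document partition (Hoffart et al. 2011).
# The counts (946 / 216 / 231) are the conventional ones; verify against
# the original dataset release before relying on them.
AIDA_CONLL_SPLITS = {
    "train": range(1, 947),      # documents 1-946: training
    "testa": range(947, 1163),   # documents 947-1162: validation (dev)
    "testb": range(1163, 1394),  # documents 1163-1393: in-domain test
}
```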