A Recurrent Model for Collective Entity Linking with Adaptive Features
Authors: Xiaoling Zhou, Yukai Miao, Wei Wang, Jianbin Qin
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we evaluate the performance of our proposed model on the most popular benchmark datasets for NED, and compare it with several previous state-of-the-art NED systems. |
| Researcher Affiliation | Academia | 1School of Computer Science and Engineering, UNSW, Australia 2College of Computer Science and Technology, DGUT, China 3Shenzhen Institute of Computer Science, Shenzhen University, China |
| Pseudocode | No | The paper describes the system architecture and its steps in prose and diagrams, but it does not contain formal pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is released at https://github.com/tjumyk/RMA |
| Open Datasets | Yes | We validate our models on six popular benchmark datasets for NED, and compare it with several previous state-of-the-art NED systems. The statistics are shown in Table 1. AIDA-CoNLL (Hoffart et al. 2011)... |
| Dataset Splits | Yes | Following previous work, we train our model on AIDA-train set, tune hyperparameters on AIDA-A set, and test on AIDA-B set (in-domain test) and other datasets (out-domain test). |
| Hardware Specification | Yes | Our model takes 10 minutes for training a local model or 15 minutes for training a global model on AIDA-train with a 10-core 3.3GHz CPU, which compares favourably with SOTA deep neural based methods, e.g. (Le and Titov 2018) takes 1.5 hours to train on the same dataset with a single Titan X GPU and (Ganea and Hofmann 2017) needs 16 hours in the same setting. |
| Software Dependencies | No | The paper mentions using 'XGBoosting' but does not specify version numbers for this or any other software dependencies. |
| Experiment Setup | Yes | In candidate entity generation, we select top-50 candidate entities from the dictionary according to the entity prior. We adopt XGBoosting with rank:pairwise objective, and set the n_estimators to 4900, and max_depth to 6 according to the parameter tuning on AIDA-A set, and the iteration number T for global model is 4. |
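For concreteness, the reported experiment setup could be sketched as the following configuration. This is a minimal illustration, not the authors' released code: the parameter names (`objective`, `n_estimators`, `max_depth`) are the standard XGBoost ones, and the commented-out ranker usage assumes the `xgboost` scikit-learn wrapper is available.

```python
# Hypothetical sketch of the candidate-ranking configuration reported in the
# paper: XGBoost with a pairwise learning-to-rank objective, hyperparameters
# tuned on the AIDA-A dev set.
params = {
    "objective": "rank:pairwise",  # pairwise ranking objective
    "n_estimators": 4900,          # number of boosting rounds (tuned on AIDA-A)
    "max_depth": 6,                # tree depth (tuned on AIDA-A)
}

# Iteration count T for the global (collective) model, per the paper.
GLOBAL_ITERATIONS = 4

# With the xgboost package installed, the ranker would be built roughly as:
#   import xgboost
#   ranker = xgboost.XGBRanker(**params)
#   ranker.fit(X_train, y_train, group=group_sizes)  # one group per mention
```

Here `group_sizes` would give the number of candidate entities per mention (top-50 in the paper), so that ranking losses are computed within each mention's candidate list.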