Unified Named Entity Recognition as Word-Word Relation Classification

Authors: Jingye Li, Hao Fei, Jiang Liu, Shengqiong Wu, Meishan Zhang, Chong Teng, Donghong Ji, Fei Li

AAAI 2022, pp. 10965-10973 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We perform extensive experiments on 14 widely-used benchmark datasets for flat, overlapped, and discontinuous NER (8 English and 6 Chinese datasets), where our model beats all the current top-performing baselines, pushing the state-of-the-art performances of unified NER.
Researcher Affiliation | Academia | 1 Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University, China; 2 Institute of Computing and Intelligence, Harbin Institute of Technology (Shenzhen), China
Pseudocode | No | The paper describes its architecture and formulations but does not include any pseudocode or algorithm blocks.
Open Source Code | Yes | Our code is available at https://github.com/ljynlp/W2NER.
Open Datasets | Yes | To evaluate our framework for three NER subtasks, we conducted experiments on 14 datasets. Flat NER Datasets: We adopt CoNLL-2003 (Sang and Meulder 2003) and OntoNotes 5.0 (Pradhan et al. 2013b) in English, and OntoNotes 4.0 (Weischedel et al. 2011), MSRA (Levow 2006), Weibo (Peng and Dredze 2015; He and Sun 2017), and Resume (Zhang and Yang 2018) in Chinese. [...] Overlapped NER Datasets: We conduct experiments on ACE 2004 (Doddington et al. 2004), ACE 2005 (Walker et al. 2011), and GENIA (Kim et al. 2003). [...] Discontinuous NER Datasets: We experiment on three datasets for discontinuous NER, namely CADEC (Karimi et al. 2015), ShARe13 (Pradhan et al. 2013a), and ShARe14 (Mowery et al. 2014).
Dataset Splits | Yes | For GENIA, we follow Yan et al. (2021) to use five types of entities and split the train/dev/test as 8.1:0.9:1.0. For ACE 2004 and ACE 2005 in English, we use the same data split as Lu and Roth (2015); Yu et al. (2020). For ACE 2004 and ACE 2005 in Chinese, we split the train/dev/test as 8.0:1.0:1.0. We use the preprocessing scripts provided by Dai et al. (2020) for data splitting. (An illustrative ratio-split sketch follows the table.)
Hardware Specification | No | The paper mentions using BERT and Bi-LSTM but does not specify any hardware details such as GPU models, CPU types, or other computing resources used for the experiments.
Software Dependencies | No | The paper mentions using BERT and Bi-LSTM, but it does not specify any software versions for libraries, frameworks, or programming languages (e.g., PyTorch version, Python version, CUDA version).
Experiment Setup | No | The paper states: "We employ the same experimental settings in previous work (Lample et al. 2016; Yan et al. 2021; Ma et al. 2020; Li et al. 2020b)." However, it does not explicitly provide specific hyperparameter values (e.g., learning rate, batch size, number of epochs) or detailed training configurations within the main text.
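
Illustrative sketch for the Dataset Splits row: the paper reports ratio-based train/dev/test splits (8.1:0.9:1.0 for GENIA, 8.0:1.0:1.0 for Chinese ACE 2004/2005), with the actual splitting done by the preprocessing scripts of Dai et al. (2020) and Yan et al. (2021). The Python snippet below is a minimal, hypothetical reconstruction of such a ratio split, not code from the paper or its repository; the function name ratio_split and the sentence list are assumptions.

import random

def ratio_split(samples, ratios=(8.1, 0.9, 1.0), seed=42):
    # Hypothetical helper: partition a list of sentences into train/dev/test
    # by ratio weights, e.g. 8.1:0.9:1.0 (GENIA) or 8.0:1.0:1.0 (Chinese ACE).
    # The paper itself uses the preprocessing scripts of Dai et al. (2020).
    rng = random.Random(seed)
    shuffled = list(samples)
    rng.shuffle(shuffled)
    total = sum(ratios)
    n_train = int(len(shuffled) * ratios[0] / total)
    n_dev = int(len(shuffled) * ratios[1] / total)
    train = shuffled[:n_train]
    dev = shuffled[n_train:n_train + n_dev]
    test = shuffled[n_train + n_dev:]
    return train, dev, test

# Example usage with a hypothetical list of GENIA sentences:
# train, dev, test = ratio_split(genia_sentences, ratios=(8.1, 0.9, 1.0))

Note that a random shuffle before splitting is only one possible design choice; for comparability with published results, the exact splits released with the preprocessing scripts should be used.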