Boundary Enhanced Neural Span Classification for Nested Named Entity Recognition

Authors: Chuanqi Tan, Wei Qiu, Mosha Chen, Rui Wang, Fei Huang

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments show that our approach outperforms all existing methods and achieves 85.3, 83.9, and 78.3 scores in terms of F1 on the ACE2004, ACE2005, and GENIA datasets, respectively."
Researcher Affiliation | Industry | "Alibaba Group {chuanqi.tcq, qiuwei.cw, chenmosha.cms, masi.wr, f.huang}@alibaba-inc.com"
Pseudocode | No | Not found.
Open Source Code | No | The paper does not provide an explicit statement or link to open-source code for the described methodology.
Open Datasets | Yes | "We conduct experiments on three standard benchmark datasets as ACE2004, ACE2005 (Doddington et al. 2004), and GENIA (Kim et al. 2003) datasets."
Dataset Splits | Yes | The statistics of these datasets are shown in Table 1, which reports Train, Dev, and Test splits with the number of sentences and entities for each of ACE2004, ACE2005, and GENIA.
Hardware Specification | No | The paper does not specify the hardware used for experiments (e.g., specific GPU/CPU models, memory, or cloud instances).
Software Dependencies | No | The paper mentions several tools and models, such as GloVe embeddings, Adam, the BERT-Base model, and dropout, but does not provide version numbers for software dependencies such as Python, PyTorch, or TensorFlow.
Experiment Setup | Yes | "For the LSTM encoder, we use 300-dimensional uncased pre-trained GloVe embeddings... The size of character embedding and part-of-speech embedding are set to 50. The hidden vector length is set to 150. The model is optimized using Adam... with the learning rate of 0.002. For BERT encoder, we use the BERT-Base model... The hidden vector length is 768... We use Adam optimizer with the learning rate of 3e-5. In addition, we also apply 0.2 dropout... w is set to 0.5 for both the LSTM and BERT encoder."
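
Because the paper reports its hyperparameters but releases no code (see "Open Source Code" above), the quoted experiment setup can be summarized as a configuration sketch for anyone attempting reproduction. The Python snippet below is a hypothetical reconstruction: every identifier is an assumption, only the numeric values come from the paper's quoted setup, and the role of w is not spelled out in the quoted text, so the comment on it is likewise an assumption.

# Hypothetical configuration sketch distilled from the quoted setup above.
# All names are assumptions; only the numeric values come from the paper.

LSTM_ENCODER_CONFIG = {
    "word_embeddings": "GloVe, 300-dim, uncased, pre-trained",
    "char_embedding_dim": 50,
    "pos_embedding_dim": 50,
    "hidden_dim": 150,
    "optimizer": "Adam",
    "learning_rate": 2e-3,  # reported as 0.002
    "dropout": 0.2,
    "w": 0.5,               # value reported; its exact role is assumed here
}

BERT_ENCODER_CONFIG = {
    "pretrained_model": "BERT-Base",
    "hidden_dim": 768,
    "optimizer": "Adam",
    "learning_rate": 3e-5,
    "dropout": 0.2,
    "w": 0.5,
}

Note that dropout (0.2) and w (0.5) are reported for both encoders, while the embedding, hidden-size, and learning-rate settings differ between the LSTM and BERT variants.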