Domain-Adapted Dependency Parsing for Cross-Domain Named Entity Recognition

Authors: Chenxiao Dou, Xianghui Sun, Yaoshu Wang, Yunjie Ji, Baochang Ma, Xiangang Li

AAAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Finally, through extensive experiments, we show our proposed method can not only effectively take advantage of word-dependency knowledge, but also significantly outperform other Multi-Task Learning methods on cross-domain NER.
Researcher Affiliation | Collaboration | Chenxiao Dou1, Xianghui Sun2, Yaoshu Wang*3, Yunjie Ji2, Baochang Ma2, Xiangang Li2; 1Nanhu Academy of Electronics and Information Technology, 2Beike, 3Shenzhen Institute of Computing Sciences, Shenzhen University; douchenxiao@cnaeit.com, {sunxianghui002,jiyunjie001,mabaochang001,lixiangang002}@ke.com, yaoshuw@sics.ac.cn
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | Our code is open-source and available at https://github.com/xianghuisun/DADP.
Open Datasets | Yes | To evaluate the effectiveness of the proposed method, we conduct experiments on four English NER datasets, including CoNLL03, WNUT17, MitRest and NCBI. The four datasets come from four different domains, which are listed in Table 1. In addition, as the DP task is taken as the auxiliary task in the proposed method, we adopt OntoNotes 5.0 as our DP source dataset, converted to the Stanford dependency-tree format by using Stanford CoreNLP (Manning et al. 2014). Detailed statistics of the datasets are listed in Table 1. (The dependency-format conversion is sketched below the table.)
Dataset Splits | No | Table 1 shows the statistics for 'Train' and 'Test' splits (e.g., 'OntoNotes #sentence 59924 8262'), but does not explicitly provide details for a validation split.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies | No | The paper mentions software like BERT, BiLSTM, AdamW, and spaCy, but does not provide specific version numbers for these or any other ancillary software components.
Experiment Setup | Yes | Hyperparameters. We set the threshold of the maximum epoch as 100 for every model training. To our proposed model, the adopted BiLSTM module is incorporated with two 768-dimension LSTM layers. Each representation layer after BiLSTM is introduced with 128 dimensions. For the two biaffine classifiers, the parameters are configured as described in the previous section. With all the datasets, we use the batch size as 16 and the input maximum length as 256. In the training process, AdamW is taken as our optimizer with the learning rate 2e-5.
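The Open Datasets row notes that the auxiliary dependency-parsing data comes from OntoNotes 5.0 converted to Stanford dependency-tree format with Stanford CoreNLP. As an illustration only, not the paper's actual pipeline, the sketch below uses spaCy (which the paper also mentions) to show the kind of per-token head/relation annotation such a conversion produces; the model name and the CoNLL-like column layout are assumptions.

```python
# Illustrative only: the paper converts OntoNotes 5.0 with Stanford CoreNLP;
# this sketch uses spaCy to show the word-dependency annotation (head index
# plus relation label) that the auxiliary DP task consumes.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumed English pipeline

def to_conll_like(sentence: str) -> str:
    """Return one line per token: ID, FORM, HEAD (1-based, 0 = root), DEPREL."""
    doc = nlp(sentence)
    rows = []
    for tok in doc:
        # spaCy marks the root by pointing a token's head at itself.
        head = 0 if tok.head is tok else tok.head.i + 1
        rows.append(f"{tok.i + 1}\t{tok.text}\t{head}\t{tok.dep_}")
    return "\n".join(rows)

print(to_conll_like("EU rejects German call to boycott British lamb."))
```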
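The Experiment Setup row translates directly into a training configuration. A minimal sketch, assuming a PyTorch-style setup; the class and field names are hypothetical placeholders and do not mirror the released DADP code's actual API.

```python
# Hypothetical configuration object collecting the hyperparameters reported
# in the paper; names are illustrative, only the values come from the source.
from dataclasses import dataclass

import torch

@dataclass
class TrainConfig:
    max_epochs: int = 100        # maximum-epoch threshold for every model training
    lstm_layers: int = 2         # BiLSTM module with two LSTM layers
    lstm_hidden: int = 768       # 768-dimension LSTM layers
    repr_dim: int = 128          # 128-dim representation layers feeding the biaffine classifiers
    batch_size: int = 16
    max_seq_len: int = 256
    learning_rate: float = 2e-5  # AdamW learning rate

def build_optimizer(model: torch.nn.Module, cfg: TrainConfig) -> torch.optim.Optimizer:
    # The paper reports AdamW; torch.optim.AdamW matches that choice.
    return torch.optim.AdamW(model.parameters(), lr=cfg.learning_rate)
```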