Chinese NER with Height-Limited Constituent Parsing
Authors: Rui Wang, Xin Xin, Wei Chang, Kun Ming, Biao Li, Xin Fan
AAAI 2019, pp. 7160–7167
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on the OntoNotes 4.0 dataset demonstrate that the proposed model outperforms the state-of-the-art method by 2.79 points in F1-measure. |
| Researcher Affiliation | Collaboration | Rui Wang,1 Xin Xin,1 Wei Chang,1 Kun Ming,1 Biao Li,2 Xin Fan2 1BJ ER Center of HVLIP&CC, School of Comp. Sci. & Tech., Beijing Institute of Technology, Beijing, China 2Tencent, Beijing, China {ruicn,xxin,2220160514,2120171045}@bit.edu.cn, {biotli,hsinfan}@tencent.com |
| Pseudocode | No | The paper describes algorithms (dynamic programming, pruning) but does not include a clearly labeled pseudocode or algorithm block. |
| Open Source Code | No | The paper does not provide an unambiguous statement or a link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | The experiments are conducted on the OntoNotes 4.0 dataset (Weischedel et al. 2011). |
| Dataset Splits | Yes | The dataset contains 15,724 sentences in the training set, 4,301 sentences in the development set, and 4,346 sentences in the testing set, with more than 490,000 characters in total. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments, such as GPU or CPU models. |
| Software Dependencies | No | The paper mentions using systems such as ZPar and Lattice LSTM, but provides no version numbers for any software dependencies or libraries. |
| Experiment Setup | Yes | Table 1: Hyper-parameters. Some are adopted from the work of Zhang and Yang (2018); others are set according to parameter analysis. |
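The paper itself contains no pseudocode block, but the Pseudocode row above notes that it describes a dynamic program with pruning over height-limited constituents. As a purely illustrative sketch (not the authors' algorithm), the following shows the general shape of a CKY-style dynamic program whose chart is additionally indexed by a height bound; `span_score` is a hypothetical stand-in for a model's learned span scores:

```python
def height_limited_cky(n, max_height, span_score):
    """Best score of a binary tree over tokens 0..n-1 whose height is
    at most max_height. Illustrative only; span_score(i, j) is a
    hypothetical scoring function, not the paper's model."""
    # best[h] maps a span (i, j) to the best score of a subtree of
    # height <= h covering tokens i..j-1; missing keys mean infeasible.
    best = [dict() for _ in range(max_height + 1)]
    for h in range(1, max_height + 1):
        for i in range(n):
            best[h][(i, i + 1)] = span_score(i, i + 1)  # a leaf has height 1
    for h in range(2, max_height + 1):
        for length in range(2, n + 1):
            for i in range(n - length + 1):
                j = i + length
                # Combine two children of height <= h-1; spans that cannot
                # be built within the height budget are pruned implicitly
                # by never entering the chart.
                cands = [
                    best[h - 1][(i, k)] + best[h - 1][(k, j)]
                    for k in range(i + 1, j)
                    if (i, k) in best[h - 1] and (k, j) in best[h - 1]
                ]
                if cands:
                    best[h][(i, j)] = span_score(i, j) + max(cands)
    return best[max_height].get((0, n), float("-inf"))
```

With a height budget of 3, four tokens can still be covered by a balanced tree, whereas a budget of 2 cannot span them, so the chart entry for the full sentence is absent and the function returns negative infinity.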