Position-aware Joint Entity and Relation Extraction with Attention Mechanism

Authors: Chenglong Zhang, Shuyong Gao, Haofen Wang, Wenqiang Zhang

IJCAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The experimental results show that our model is effective. With the same pre-trained encoder, our model achieves the new state-of-the-art on standard benchmarks (ACE05, CoNLL04 and SciERC), obtaining a 4.7%-17.8% absolute improvement in relation F1.
Researcher Affiliation | Academia | (1) Shanghai Key Laboratory of Intelligent Information Processing, School of Computer Science, Fudan University, Shanghai, China; (2) Academy for Engineering & Technology, Fudan University, Shanghai, China; (3) College of Design and Innovation, Tongji University, Shanghai, China
Pseudocode | No | The paper does not contain pseudocode or clearly labeled algorithm blocks.
Open Source Code | No | The paper does not provide an explicit statement about releasing open-source code or a link to a code repository for the described methodology.
Open Datasets | Yes | We use three popular relation extraction datasets: ACE05, CoNLL04 and SciERC. Table 2 shows the statistical information of each dataset. The ACE05 dataset consists of English, Arabic, and Chinese data collected from various domains, such as newswire and online forums. [...] For the CoNLL04 dataset, we adopt the training set (1,153 sentences) and test set (288 sentences) split by [Gupta et al., 2016].
Dataset Splits | Yes | To tune the hyperparameters, 20% of the training set is used as the development set.
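The paper states only the 20% figure; a minimal sketch of such a hold-out split (the random shuffling and seed are assumptions, not details from the paper) might look like:

```python
# Minimal sketch (not the authors' code): reserve 20% of the training
# sentences as a development set for hyperparameter tuning.
import random

def split_train_dev(train_sentences, dev_ratio=0.2, seed=42):
    """Randomly hold out `dev_ratio` of the training examples as a dev set."""
    rng = random.Random(seed)
    indices = list(range(len(train_sentences)))
    rng.shuffle(indices)
    n_dev = int(len(indices) * dev_ratio)
    dev_idx = set(indices[:n_dev])
    train = [s for i, s in enumerate(train_sentences) if i not in dev_idx]
    dev = [s for i, s in enumerate(train_sentences) if i in dev_idx]
    return train, dev
```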
Hardware Specification | No | The paper mentions using pre-trained models like "bert-base-uncased" and "albert-xxlarge-v2" but does not specify the hardware (e.g., GPU or CPU models, memory) used for training or experimentation.
Software Dependencies | No | The paper mentions the use of pre-trained models like "bert-base-uncased" and "albert-xxlarge-v2", but does not list specific software dependencies with version numbers (e.g., Python version, PyTorch version).
Experiment Setup | Yes | For datasets ACE05, CoNLL04 and SciERC, we set the maximum length of the extended sentences to 256. We consider spans up to L = 8 words. On the datasets ACE05, CoNLL04 and SciERC, we take the retention factor λ = 0.05.
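The reported hyperparameters can be collected into a single configuration sketch; the key names and the encoder entry below are illustrative assumptions, only the three numeric values are quoted from the paper:

```python
# Hypothetical configuration sketch of the quoted experiment setup.
# Key names and the encoder choice are assumptions for illustration.
config = {
    "max_extended_sentence_length": 256,  # maximum length of extended sentences
    "max_span_length": 8,                 # spans of up to L = 8 words
    "retention_factor": 0.05,             # lambda on ACE05, CoNLL04 and SciERC
    "encoder": "bert-base-uncased",       # one of the pre-trained encoders mentioned
}
```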