Interactive Information Extraction by Semantic Information Graph

Authors: Siqi Fan, Yequan Wang, Jing Li, Zheng Zhang, Shuo Shang, Peng Han

IJCAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results show that our InterIE achieves state-of-the-art performance on all IE subtasks on the benchmark datasets (i.e., ACE05-E+ and ACE05-E). More importantly, the proposed model is not sensitive to the decoding order, which goes beyond the limitations of AMR-based methods.
Researcher Affiliation | Collaboration | Siqi Fan (1), Yequan Wang (2), Jing Li (3), Zheng Zhang (4), Shuo Shang (1) and Peng Han (5); (1) University of Electronic Science and Technology of China, Chengdu, China; (2) Beijing Academy of Artificial Intelligence, Beijing, China; (3) Intelligence, Abu Dhabi, United Arab Emirates; (4) Department of Computer Science and Technology, Tsinghua University, Beijing, China; (5) Aalborg University
Pseudocode | Yes | Algorithm 1 (SIG generation). Input: sentence S = w1, ..., wn. Output: SIG G(V, E). (A toy sketch of this input/output contract follows the table.)
Open Source Code | Yes | Our code is released to support research: https://github.com/LucyFann/InterIE
Open Datasets | Yes | We conduct experiments on the Automatic Content Extraction (ACE) 2005 dataset, which provides entity, relation and event annotations. Following previous work [Wadden et al., 2019; Lin et al., 2020], we use two different versions of ACE05 for the joint IE task, i.e., ACE05-E and ACE05-E+.
Dataset Splits | Yes | Table 1 (statistics of ACE05-E and ACE05-E+): ACE05-E: Train 17,172 / Dev 923 / Test 832; ACE05-E+: Train 19,240 / Dev 902 / Test 676.
Hardware Specification | Yes | We train our model with Adam [Kingma and Ba, 2015] on an NVIDIA 3090 with a learning rate of 2e-5 for RoBERTa parameters and 4e-4 for others.
Software Dependencies | No | The paper mentions using RoBERTa and Adam but does not specify version numbers for software dependencies such as Python, PyTorch, TensorFlow, or other libraries; it only lists training hyperparameters (e.g., learning rates).
Experiment Setup | Yes | We train our model with Adam [Kingma and Ba, 2015] on an NVIDIA 3090 with a learning rate of 2e-5 for RoBERTa parameters and 4e-4 for others. The number of epochs is 100. For a fair comparison, we use the same settings for the other parameters as [Lin et al., 2020; Zhang and Ji, 2021]. (An optimizer sketch matching these learning rates follows the table.)
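For a concrete picture of the input/output contract stated in the Pseudocode row (Algorithm 1: sentence in, SIG G(V, E) out), here is a minimal, hypothetical Python sketch. Only the signature comes from the table above; the node spans, types, and edge labels are illustrative placeholders, not the paper's actual SIG construction rules.

```python
import networkx as nx

def generate_sig(tokens, node_spans, edges):
    """Toy stand-in for Algorithm 1 (SIG generation): sentence -> SIG G(V, E).

    Only the input/output shape matches the paper; how nodes and edges are
    actually decoded is not reproduced here.
    """
    graph = nx.DiGraph()
    for span, node_type in node_spans:
        # Each node covers a token span and carries a semantic type
        # (e.g., an entity type or event-trigger type).
        graph.add_node(span, text=" ".join(tokens[span[0]:span[1]]), type=node_type)
    for src, dst, label in edges:
        # Edges carry semantic labels such as relation types or argument roles.
        graph.add_edge(src, dst, label=label)
    return graph

# Hypothetical usage on a toy sentence (spans are [start, end) token indices):
tokens = ["The", "man", "returned", "to", "Los", "Angeles"]
node_spans = [((1, 2), "PER"), ((2, 3), "Transport"), ((4, 6), "GPE")]
edges = [((2, 3), (1, 2), "Artifact"), ((2, 3), (4, 6), "Destination")]
sig = generate_sig(tokens, node_spans, edges)
print(sig.nodes(data=True))
```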
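The Experiment Setup row reports two learning rates (2e-5 for RoBERTa parameters, 4e-4 for the rest) with Adam and 100 epochs. A minimal PyTorch sketch of that configuration, assuming a Hugging Face RoBERTa encoder and an illustrative task head (the released code may organize this differently):

```python
import torch
from transformers import RobertaModel

# Assumption: the encoder is Hugging Face's roberta-large; the task head below
# is a placeholder for the model's IE-specific layers.
encoder = RobertaModel.from_pretrained("roberta-large")
task_head = torch.nn.Linear(encoder.config.hidden_size, 128)

# Two parameter groups matching the reported learning rates:
# 2e-5 for RoBERTa parameters, 4e-4 for all other parameters.
optimizer = torch.optim.Adam([
    {"params": encoder.parameters(), "lr": 2e-5},
    {"params": task_head.parameters(), "lr": 4e-4},
])

for epoch in range(100):  # the paper trains for 100 epochs
    pass  # per-batch forward pass, loss computation, and optimizer.step() would go here
```

Separate parameter groups keep updates to the pretrained encoder small while letting freshly initialized layers learn faster, which is consistent with the two learning rates quoted above.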