Aggregating Inter-Sentence Information to Enhance Relation Extraction
Authors: Hao Zheng, Zhoujun Li, Senzhang Wang, Zhao Yan, Jianshe Zhou
AAAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results verify the effectiveness of our method for aggregating information across sentences. Additionally, to further improve the ranking of high-quality extractions, we propose an effective method to rank relations from different entity pairs. This method can be easily integrated into our overall relation extraction framework, and boosts the precision significantly. |
| Researcher Affiliation | Academia | State Key Laboratory of Software Development Environment, Beihang University, Beijing, 100191, China; Beijing Advanced Innovation Center for Imaging Technology, Capital Normal University, 100048, China. {zhenghao35, lizj, szwang, yanzhao}@buaa.edu.cn, zhoujs@cnu.edu.cn |
| Pseudocode | Yes | Algorithm 1 The learning algorithm for Rank RE-local; Algorithm 2 The learning algorithm for Rank RE-global |
| Open Source Code | No | The paper does not provide a link to its source code or explicitly state that the code for its method is being released. |
| Open Datasets | Yes | We evaluate our method on the KBP dataset developed by Surdeanu et al. (2012). The KBP dataset contains 183,062 training gold relations and 3334 testing gold relations from 41 relation types. |
| Dataset Splits | Yes | In practice, we use the same partition of the dataset for tuning and testing as Surdeanu et al. (2012). That is, 40 queries are used for development and 160 queries are used for formal evaluation. |
| Hardware Specification | No | The paper does not specify any hardware details such as GPU models, CPU types, or memory used for the experiments. |
| Software Dependencies | No | The paper does not mention specific software names with version numbers, such as programming languages, libraries, or frameworks used. |
| Experiment Setup | Yes | Our method has two parameters that require tuning: the number of iterations (T) and the value (a) used to control the effect of the number of instances. We tune them using the development queries, and obtain the optimal values T = 7, a = 0.2 for Rank RE-global and T = 11, a = 0.18 for Rank RE-local. |
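The experiment-setup and dataset-split rows above describe a standard tune-on-development, report-on-evaluation protocol. The following is a minimal sketch of that protocol, not the authors' code: `train_ranker`, `evaluate_f1`, and the search grids are illustrative assumptions, while the 40/160 query split and the reported optima (T = 7, a = 0.2 for Rank RE-global; T = 11, a = 0.18 for Rank RE-local) come from the paper.

```python
# Hypothetical sketch of the tuning protocol described above.
# T is the number of iterations; a controls the effect of the number
# of instances. Both are selected on the 40 KBP development queries,
# and the 160 evaluation queries are held out for the final score.

from itertools import product


def train_ranker(train_relations, num_iterations, a):
    """Placeholder for training Rank RE-local or Rank RE-global (not provided by the paper)."""
    raise NotImplementedError


def evaluate_f1(model, queries):
    """Placeholder for scoring a trained model on a set of KBP queries."""
    raise NotImplementedError


def tune(train_relations, dev_queries):
    """Pick (T, a) by F1 on the development queries only."""
    t_grid = range(1, 16)                                 # assumed search range
    a_grid = [round(0.02 * k, 2) for k in range(5, 16)]   # assumed: 0.10 .. 0.30
    best_f1, best_params = float("-inf"), None
    for T, a in product(t_grid, a_grid):
        model = train_ranker(train_relations, num_iterations=T, a=a)
        f1 = evaluate_f1(model, dev_queries)
        if f1 > best_f1:
            best_f1, best_params = f1, (T, a)
    return best_params
```

With this setup, the evaluation queries are scored exactly once, using the tuned settings reported in the table.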