Multi-view Inference for Relation Extraction with Uncertain Knowledge
Authors: Bo Li, Wei Ye, Canming Huang, Shikun Zhang (pp. 13234–13242)
AAAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experiment results show that our model achieves competitive performances on both sentence- and document-level relation extraction, which verifies the effectiveness of introducing uncertain knowledge and the multi-view inference framework that we design. |
| Researcher Affiliation | Academia | (1) National Engineering Research Center for Software Engineering, Peking University; (2) School of Software and Microelectronics, Peking University; (3) Beijing University of Posts and Telecommunications. Emails: deepblue.lb@stu.pku.edu.cn, wye@pku.edu.cn, huangzzk@bupt.edu.cn, zhangsk@pku.edu.cn |
| Pseudocode | No | The paper describes the architecture and processes textually and with diagrams, but it does not include any explicit pseudocode blocks or algorithms. |
| Open Source Code | No | The paper states: "We have released ProbaseDesp to facilitate further research" (https://github.com/pkuserc/AAAI2021-MIUK-RelationExtraction). However, ProbaseDesp is described as a "corpus" or "external knowledge resource" in the paper's contribution section, not as the source code for the proposed MIUK method itself. The link therefore points to a dataset, not to an implementation of the method. |
| Open Datasets | Yes | For document-level relation extraction, we use DocRED proposed by Yao et al. (2019). For sentence-level relation extraction, we use the ACE2005 dataset following Ye et al. (2019). We have released ProbaseDesp to facilitate further research (https://github.com/pkuserc/AAAI2021-MIUK-RelationExtraction). |
| Dataset Splits | Yes | DocRED has 3,053 training documents, 1,000 development documents and 1,000 test documents, with 97 relation types (including No Relation). We use five-fold cross-validation to evaluate the performance, and we report the precision (P), recall (R) and Micro F1-score (Micro-F1) of the positive instances. (A sketch of such a five-fold split appears below the table.) |
| Hardware Specification | No | The paper mentions using "uncased BERT-base" and states that the "size of each word embedding is 768", but it does not provide any specific details about the hardware (e.g., GPU model, CPU type, memory) used to run the experiments. (A quick check of the 768-dimensional hidden size appears below the table.) |
| Software Dependencies | No | The paper mentions using "Uncased BERT-Base (Devlin et al. 2019)" but does not provide specific version numbers for BERT or any other software libraries, frameworks, or dependencies used in the experiments. |
| Experiment Setup | Yes | We experiment with the following values of hyper-parameters: 1) the learning rates lr_BERT and lr_Other for BERT and the other parameters, {1×10⁻³, 1×10⁻⁴, 1×10⁻⁵}; 2) the size of the input vector, entity description vector and concept representation, {50, 100, 150, 200}; 3) the size of the distance embedding, {5, 10, 20, 30}; 4) the batch size, {4, 8, 12, 16, 20, 24}; and 5) the dropout ratio, {0.1, 0.2, 0.3, 0.4, 0.5}. We tune the hyper-parameters on the development set, and we evaluate the performance on the test set. Table 1 lists the selected hyper-parameter values in our experiments. (A sketch of this grid search appears below the table.) |
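
For the sentence-level (ACE2005) evaluation protocol quoted in the Dataset Splits row, here is a minimal sketch of a five-fold split. It assumes `instances` is a list of labelled examples; scikit-learn's `KFold` is our choice for illustration, not necessarily what the authors used.

```python
from sklearn.model_selection import KFold

instances = [f"sentence_{i}" for i in range(100)]  # placeholder labelled examples

kf = KFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, test_idx) in enumerate(kf.split(instances)):
    train = [instances[i] for i in train_idx]
    test = [instances[i] for i in test_idx]
    print(f"fold {fold}: {len(train)} train / {len(test)} test")
    # Train on `train`, predict on `test`, and pool predictions across folds
    # before computing micro-averaged P / R / F1 over the positive instances.
```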
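The 768-dimensional word embedding quoted in the Hardware Specification row is simply the hidden size of BERT-base. A one-line check with the Hugging Face `transformers` package (that the exact checkpoint is `bert-base-uncased` is our assumption):

```python
from transformers import BertConfig

# Hidden size of uncased BERT-base; this matches the 768-dimensional
# word embeddings reported in the paper.
config = BertConfig.from_pretrained("bert-base-uncased")
print(config.hidden_size)  # 768
```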
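The hyper-parameter grid in the Experiment Setup row is small enough to enumerate exhaustively. Below is a minimal Python sketch of that enumeration; the dictionary keys and the `grid` helper are illustrative names, not taken from the authors' released code, and the actual tuning loop (training on the training set, scoring on the development set) is left as a comment.

```python
import itertools

# Hyper-parameter grid as reported in the paper; key names are illustrative.
search_space = {
    "lr_bert":    [1e-3, 1e-4, 1e-5],       # learning rate for BERT parameters
    "lr_other":   [1e-3, 1e-4, 1e-5],       # learning rate for the other parameters
    "vec_size":   [50, 100, 150, 200],      # input / entity-description / concept vectors
    "dist_emb":   [5, 10, 20, 30],          # distance-embedding size
    "batch_size": [4, 8, 12, 16, 20, 24],
    "dropout":    [0.1, 0.2, 0.3, 0.4, 0.5],
}

def grid(space):
    """Yield every combination in the search space as a config dict."""
    keys = list(space)
    for values in itertools.product(*(space[k] for k in keys)):
        yield dict(zip(keys, values))

configs = list(grid(search_space))
print(len(configs))  # 3 * 3 * 4 * 4 * 6 * 5 = 4320 candidate configurations
# for cfg in configs:
#     dev_f1 = train_and_eval(cfg)  # hypothetical trainer scored on the dev set
```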