Document-Level Relation Extraction with Reconstruction
Authors: Wang Xu, Kehai Chen, Tiejun Zhao
AAAI 2021, pp. 14167-14175 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on a large-scale DocRE dataset show that the proposed model can significantly improve the accuracy of relation extraction on a strong heterogeneous graph-based baseline. |
| Researcher Affiliation | Academia | Harbin Institute of Technology, Harbin, China; National Institute of Information and Communications Technology, Kyoto, Japan |
| Pseudocode | No | No explicit pseudocode or algorithm block was found. |
| Open Source Code | Yes | The code is publicly available at https://github.com/xwjim/DocRE-Rec. |
| Open Datasets | Yes | The proposed methods were evaluated on a large-scale human-annotated dataset for document-level relation extraction (Yao et al. 2019). DocRED contains 3,053 documents for the training set, 1,000 documents for the development set, and 1,000 documents for the test set, with a total of 132,375 entities, 56,354 relational facts, and 96 relation types. |
| Dataset Splits | Yes | DocRED contains 3,053 documents for the training set, 1,000 documents for the development set, and 1,000 documents for the test set (a minimal loading sketch follows the table). |
| Hardware Specification | No | No specific hardware (e.g., GPU models, CPU types, or cloud instance names) used for experiments was mentioned. |
| Software Dependencies | No | The paper mentions 'GloVe embedding (100d) and BiLSTM (128d) as word embedding and encoder', 'Adam as the optimizer', and 'uncased BERT-Base model (768d)'. However, no version numbers are given for these software components or the underlying libraries. |
| Experiment Setup | Yes | The hop number L of the encoder was set to 2. The learning rate was set to 1e-4 and we trained the model using Adam as the optimizer. For the BERT representations, we used the uncased BERT-Base model (768d) as the encoder and the learning rate was set to 1e-5. For evaluation, we used F1 and Ign F1 as the evaluation metrics. (Illustrative sketches of this configuration and of the F1/Ign F1 computation follow the table.) |
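The dataset-splits row can be made concrete with a minimal loading sketch. This assumes the file names and JSON layout of the official DocRED release (`train_annotated.json`, `dev.json`, `test.json`); the paper itself does not show any loading code.

```python
import json

def load_docred(path):
    """Load one DocRED split: a list of documents, each carrying
    'sents' (tokenized text), 'vertexSet' (entity mentions), and,
    for annotated splits, 'labels' (relational facts)."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)

# File names follow the official DocRED release; adjust if yours differ.
train = load_docred("DocRED/train_annotated.json")
dev = load_docred("DocRED/dev.json")
test = load_docred("DocRED/test.json")

# The splits quoted above: 3,053 / 1,000 / 1,000 documents.
print(len(train), len(dev), len(test))
```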
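Similarly, the experiment-setup row maps onto a short, hypothetical PyTorch configuration. Only the hyperparameters named in the table come from the paper; the `torch.nn.Linear` module is a stand-in for the authors' graph-based model, which is not reproduced here.

```python
import torch

# Hyperparameters reported in the paper (see the table above).
GLOVE_DIM = 100    # GloVe word embeddings (100d)
BILSTM_DIM = 128   # BiLSTM encoder hidden size (128d)
BERT_DIM = 768     # uncased BERT-Base encoder (768d)
NUM_HOPS = 2       # hop number L of the graph encoder
LR_GLOVE = 1e-4    # learning rate with the GloVe/BiLSTM encoder
LR_BERT = 1e-5     # learning rate with the BERT encoder

# Placeholder module standing in for the paper's actual model.
model = torch.nn.Linear(BILSTM_DIM, 97)  # 96 relation types + "no relation"

# The paper trains with Adam; swap in LR_BERT when BERT is the encoder.
optimizer = torch.optim.Adam(model.parameters(), lr=LR_GLOVE)
```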
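F1 and Ign F1 are the standard DocRED metrics. The sketch below is an assumption about the evaluation rather than code from the paper: it follows the official DocRED evaluation protocol as commonly implemented, where Ign F1 discounts correct predictions whose fact already appears in the training annotations.

```python
def docred_f1(predictions, gold, train_facts):
    """F1 and Ign F1 over (doc_id, head, tail, relation) triples.

    `train_facts` is a set of (head, tail, relation) facts seen in the
    training annotations; Ign F1 removes correct predictions of such
    facts from the precision term, mirroring the official DocRED
    evaluation script.
    """
    pred_set, gold_set = set(predictions), set(gold)
    correct = pred_set & gold_set
    in_train = {c for c in correct if (c[1], c[2], c[3]) in train_facts}

    p = len(correct) / len(pred_set) if pred_set else 0.0
    r = len(correct) / len(gold_set) if gold_set else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0

    ign_pred = len(pred_set) - len(in_train)
    ign_p = (len(correct) - len(in_train)) / ign_pred if ign_pred else 0.0
    ign_f1 = 2 * ign_p * r / (ign_p + r) if ign_p + r else 0.0
    return f1, ign_f1
```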