Document-level Relation Extraction via Subgraph Reasoning
Authors: Xingyu Peng, Chong Zhang, Ke Xu
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on DocRED show that SGR outperforms existing models, and further analyses demonstrate that our method is both effective and explainable. Our code is available at https://github.com/Crysta1ovo/SGR. |
| Researcher Affiliation | Academia | Xingyu Peng, Chong Zhang and Ke Xu, State Key Lab of Software Development Environment, Beihang University, Beijing, 100191, China. {xypeng, chongzh, kexu}@buaa.edu.cn |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | Our code is available at https://github.com/Crysta1ovo/SGR. |
| Open Datasets | Yes | We evaluate our model on DocRED, a large-scale human-annotated dataset for document-level RE constructed from Wikipedia and Wikidata. |
| Dataset Splits | Yes | DocRED contains 3,053 documents for training, 1,000 for development, and 1,000 for testing, involving 96 relation types, 132,275 entities, and 56,354 relational facts. |
| Hardware Specification | No | No specific hardware details (e.g., GPU models, CPU types, or memory) used for running experiments are provided in the paper. |
| Software Dependencies | No | While the paper mentions using GloVe, BiLSTM, and AdamW, it does not specify version numbers for these or any other software libraries, environments, or programming languages used. |
| Experiment Setup | Yes | With the batch size set to 4, we train our model using the AdamW [Loshchilov and Hutter, 2019] optimizer, a linear learning rate scheduler with 6% warmup, and a maximum learning rate of 0.01. All hyperparameters are tuned based on the development set. (A minimal sketch of this setup follows the table.) |
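
The experiment-setup row lists only the optimizer, scheduler, peak learning rate, and batch size. The sketch below shows one way to wire those hyperparameters together in PyTorch; the model, data, loss, and epoch count are placeholder assumptions and not part of the paper. The authors' actual implementation is in the linked repository.

```python
# Hedged sketch of the reported optimization setup: AdamW, a linear learning
# rate schedule with 6% warmup, a peak learning rate of 0.01, and batch size 4.
# The model, dataset, loss, and epoch count are placeholders; the SGR
# architecture itself is not reproduced here.
import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import LambdaLR
from torch.utils.data import DataLoader

def linear_warmup_decay(total_steps, warmup_frac=0.06):
    """LR multiplier: linear warmup over the first 6% of steps, then linear decay to 0."""
    warmup_steps = int(total_steps * warmup_frac)
    def lr_lambda(step):
        if step < warmup_steps:
            return step / max(1, warmup_steps)
        return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))
    return lr_lambda

model = torch.nn.Linear(128, 97)  # placeholder standing in for the SGR model
train_loader = DataLoader(range(3053), batch_size=4, shuffle=True)  # DocRED: 3,053 training documents

epochs = 30  # assumed for illustration; the paper's table above does not state an epoch count
total_steps = epochs * len(train_loader)
optimizer = AdamW(model.parameters(), lr=0.01)
scheduler = LambdaLR(optimizer, lr_lambda=linear_warmup_decay(total_steps))

for epoch in range(epochs):
    for batch in train_loader:
        optimizer.zero_grad()
        loss = model(torch.randn(4, 128)).sum()  # stand-in for the document-level RE loss
        loss.backward()
        optimizer.step()
        scheduler.step()
```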