Exploring Effective Inter-Encoder Semantic Interaction for Document-Level Relation Extraction
Authors: Liang Zhang, Zijun Min, Jinsong Su, Pei Yu, Ante Wang, Yidong Chen
IJCAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on four benchmark datasets prove the effectiveness of our model. |
| Researcher Affiliation | Academia | School of Informatics, Xiamen University, China; Key Laboratory of Digital Protection and Intelligent Processing of Intangible Cultural Heritage of Fujian and Taiwan (Xiamen University), Ministry of Culture and Tourism, China |
| Pseudocode | No | The paper describes methods and equations but does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our source code is available at https://github.com/DeepLearnXMU/DocRE-BSI. |
| Open Datasets | Yes | We evaluate our model on four commonly-used datasets: DocRED [Yao et al., 2019], Revisit-DocRED [Huang et al., 2022], Re-DocRED [Tan et al., 2022b], DWIE [Zaporojets et al., 2021]. |
| Dataset Splits | Yes | DocRED [Yao et al., 2019] is a large-scale document-level RE dataset with 96 predefined relations... It contains 5,053 documents, which is divided into 3,053 documents for training, 1,000 for development, and 1,000 for test. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU, CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions software like 'Huggingface’s Transformers' and 'PyTorch' but does not specify their version numbers, which are required for a reproducible description of ancillary software. |
| Experiment Setup | Yes | We use AdamW [Loshchilov and Hutter, 2019] as our optimizer, which is equipped with a weight decay of 1e-4 and a linear warmup [Goyal et al., 2017] for the first 6% training steps. ... α and β are hyper-parameters, which are empirically set to 0.1 and 0.01, respectively. |
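The Experiment Setup row quotes the reported optimizer configuration (AdamW, weight decay 1e-4, linear warmup over the first 6% of training steps, α = 0.1, β = 0.01). Below is a minimal sketch of that configuration using PyTorch and Huggingface's Transformers, as the paper mentions both. The learning rate, encoder checkpoint, and total number of training steps are not given in the quoted text, so the values used here are placeholders, not the authors' settings.

```python
# Sketch of the reported optimizer/scheduler setup, assuming PyTorch and
# Huggingface's Transformers. Placeholder values are marked as hypothetical.
import torch
from transformers import AutoModel, get_linear_schedule_with_warmup

model = AutoModel.from_pretrained("bert-base-cased")  # hypothetical encoder choice

num_training_steps = 10_000                         # hypothetical total steps
num_warmup_steps = int(0.06 * num_training_steps)   # "first 6% training steps"

# AdamW with the reported weight decay of 1e-4; the learning rate is a placeholder.
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5, weight_decay=1e-4)

# Linear warmup followed by linear decay, matching the described warmup schedule.
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=num_warmup_steps,
    num_training_steps=num_training_steps,
)

# Loss-weighting hyper-parameters reported in the paper; how they combine the
# model's loss terms is specific to the authors' architecture.
alpha, beta = 0.1, 0.01
```

Note that reproducing the paper's results would also require the batch size, learning rate, and number of epochs, which this section does not record; the released repository is the authoritative source for those values.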