Attention Based Document-level Relation Extraction with None Class Ranking Loss
Authors: Xiaolong Xu, Chenbin Li, Haolong Xiang, Lianyong Qi, Xuyun Zhang, Wanchun Dou
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments conducted on benchmarking datasets demonstrate that our proposed method outperforms the state-of-the-art baselines with higher accuracy. |
| Researcher Affiliation | Academia | Xiaolong Xu (1,2), Chenbin Li (1), Haolong Xiang (1,2), Lianyong Qi (3), Xuyun Zhang (4) and Wanchun Dou (5); 1: School of Software, Nanjing University of Information Science & Technology, China; 2: Jiangsu Province Engineering Research Center of Advanced Computing and Intelligent Services, China; 3: College of Computer Science and Technology, China University of Petroleum, China; 4: School of Computing, Macquarie University, Australia; 5: Department of Computer Science and Technology, Nanjing University, China |
| Pseudocode | No | The paper describes the methodology with textual explanations and a framework diagram (Figure 2), but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The source code is available at https://github.com/sugarman490/Document-RE. |
| Open Datasets | Yes | The experiments are conducted on two widely used datasets: DocRED [Yao et al., 2019] is a common document-level RE dataset... Re-DocRED [Tan et al., 2022b] is a revised version of DocRED. |
| Dataset Splits | Yes | We report the mean and standard deviation on the development set by conducting five experiments with different random seeds. (A seed-setting sketch of this protocol follows the table.) |
| Hardware Specification | No | The paper uses BERT-Base-Cased as the text encoder but does not specify any hardware details like GPU or CPU models, memory, or cloud instances used for experimentation. |
| Software Dependencies | No | We use BERT-Base-Cased [Kenton and Toutanova, 2019] as the text encoder. While BERT-Base-Cased is a specific model, the paper does not specify the deep-learning framework (e.g., TensorFlow, PyTorch) or its version used for implementation, nor other common libraries with versions. |
| Experiment Setup | Yes | We use BERT-Base-Cased [Kenton and Toutanova, 2019] as the text encoder. The dimension of the embedded relation representation is set to 768. The learning rate is 5e-5 for BERT parameters and 1e-4 for the other layers. In the relation correlation module, the threshold for filtering noisy co-occurring relations is set to 10, and the coefficient in equation (3) is set to 0.05. We set p to 0.3 in equation (4). We employ 2-layer GAT networks with k = 2 attention heads computing 500 hidden features per head. The number of attention heads in the fusion module is set to 4. (A configuration sketch follows the table.) |
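
The dataset-splits entry states that development-set results are averaged over five runs with different random seeds. A minimal sketch of that protocol, assuming a PyTorch/NumPy stack (the paper does not name its framework) and placeholder seed values, since the actual seeds are not reported:

```python
import random

import numpy as np
import torch


def set_seed(seed: int) -> None:
    """Fix the common RNG sources so a training run is reproducible."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)


# The paper reports mean and standard deviation over five runs; the actual
# seed values are not stated, so 0-4 here are placeholders.
for seed in range(5):
    set_seed(seed)
    # ... train the model and record its dev-set score for this seed ...
```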
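
The experiment-setup entry lists the paper's reported hyperparameters; collecting them in a single configuration object makes the setup easier to reproduce and audit. A minimal sketch in Python; the field names are illustrative and may differ from those used in the authors' repository (https://github.com/sugarman490/Document-RE):

```python
from dataclasses import dataclass


@dataclass
class ExperimentConfig:
    """Hyperparameters as reported in the paper's experiment setup."""

    encoder_name: str = "bert-base-cased"  # text encoder
    relation_dim: int = 768                # embedded relation representation size
    lr_encoder: float = 5e-5               # learning rate for BERT parameters
    lr_other: float = 1e-4                 # learning rate for the remaining layers
    cooccurrence_threshold: int = 10       # filters noisy co-occurring relations
    eq3_coefficient: float = 0.05          # coefficient reported for equation (3)
    eq4_p: float = 0.3                     # p in equation (4)
    gat_layers: int = 2                    # 2-layer GAT
    gat_heads: int = 2                     # k = 2 attention heads per GAT layer
    gat_hidden_per_head: int = 500         # hidden features computed per head
    fusion_heads: int = 4                  # attention heads in the fusion module
    num_runs: int = 5                      # runs averaged on the development set


if __name__ == "__main__":
    print(ExperimentConfig())
```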