Document-level Event Factuality Identification via Reinforced Multi-Granularity Hierarchical Attention Networks
Authors: Zhong Qian, Peifeng Li, Qiaoming Zhu, Guodong Zhou
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on the DLEF-v2 corpus show that the RMHAN model outperforms several state-of-the-art baselines and achieves the best performance. |
| Researcher Affiliation | Academia | ¹School of Computer Science and Technology, Soochow University, Suzhou, China; ²AI Research Institute, Soochow University, Suzhou, China. qianzhongqz@163.com, {pfli, qmzhu, gdzhou}@suda.edu.cn |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | For more details of the resources and reproducibility of this paper, please refer to https://github.com/qz011/rmhan. |
| Open Datasets | Yes | Derived from DLEF [Qian et al., 2019], the DLEF-v2 corpus, whose statistics are presented in Table 1, is employed as the benchmark dataset to evaluate our models. |
| Dataset Splits | Yes | 10-fold cross validation is performed on the English and Chinese sub-corpora. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU/CPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper mentions using GloVe embeddings and the Adam and SGD optimizers but does not provide version numbers for any software dependencies. |
| Experiment Setup | No | The paper mentions using a warm start and the Adam and SGD optimizers, but it does not provide specific hyperparameters such as learning rate, batch size, or number of epochs for the experimental setup. |