Text2NKG: Fine-Grained N-ary Relation Extraction for N-ary Relational Knowledge Graph Construction

Authors: Haoran Luo, Haihong E, Yuhao Yang, Tianyu Yao, Yikai Guo, Zichen Tang, Wentai Zhang, Shiyao Peng, Kaiyang Wan, Meina Song, Wei Lin, Yifan Zhu, Anh Tuan Luu

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The experimental results demonstrate that Text2NKG achieves state-of-the-art performance in F1 scores on the fine-grained n-ary relation extraction benchmark. Our code and datasets are publicly available [1].
Researcher Affiliation | Collaboration | (1) School of Computer Science, Beijing University of Posts and Telecommunications, China; (2) School of Automation Science and Electrical Engineering, Beihang University, China; (3) Beijing Institute of Computer Technology and Application, China; (4) Inspur Group Co., Ltd., China; (5) College of Computing and Data Science, Nanyang Technological University, Singapore
Pseudocode | No | The paper describes its methods through text, mathematical equations, and figures, but does not include explicit pseudocode or algorithm blocks.
Open Source Code | Yes | Our code and datasets are publicly available [1]. [1] https://github.com/LHRLAB/Text2NKG
Open Datasets | Yes | The existing fine-grained n-ary RE dataset, HyperRED [5], covers only the hyper-relational schema with annotated extracted entities. Therefore, the authors expand the HyperRED dataset to four schemas as standard fine-grained n-ary RE benchmarks and conduct experiments on them. Our code and datasets are publicly available [1]. (An illustrative hyper-relational fact is sketched after the table below.)
Dataset Splits | Yes | Table 1 (dataset statistics): the columns give the number of entities, relations under the four schemas, sentences, and n-ary relational facts in all sets, the train set, the dev set, and the test set, respectively.
Hardware Specification | Yes | All experiments were done on a single NVIDIA A100 GPU.
Software Dependencies | No | The paper mentions using a “BERT-based Encoder” and the “Adam optimizer”, but does not specify version numbers for the programming language or for software libraries such as Python, PyTorch, or TensorFlow. (An illustrative dependency sketch follows the table below.)
Experiment Setup | Yes | We train for 10 epochs on HyperRED using the Adam optimizer. Appendix E shows Text2NKG's optimal hyperparameter settings. Table 4 (Hyperparameter Selection): α ∈ {1.0, 0.1, 0.01, 0.001}; train batch size ∈ {2, 4, 8, 16}; eval batch size ∈ {1}; learning rate ∈ {1e-5, 2e-5, 5e-5}; max sequence length ∈ {128, 256, 512, 1024}; weight decay ∈ {0.0, 0.1, 0.2, 0.3}. (A hedged configuration sketch based on this grid also follows below.)
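To make the hyper-relational schema referenced in the Open Datasets row concrete, the sketch below shows one common way a fine-grained n-ary fact is written down: a main subject-relation-object triple plus auxiliary key-value qualifiers. The field names, entity strings, and relation labels are illustrative assumptions, not the actual annotation format shipped with the Text2NKG benchmarks.

```python
# Illustrative only: field names and values are assumptions for explanation,
# not the annotation format released with HyperRED or the Text2NKG benchmarks.
hyper_relational_fact = {
    # Main triple of the fact
    "subject": "Barack Obama",
    "relation": "educated_at",
    "object": "Harvard Law School",
    # Auxiliary key-value qualifiers that make the fact n-ary
    "qualifiers": [
        {"key": "academic_degree", "value": "Juris Doctor"},
        {"key": "end_time", "value": "1991"},
    ],
}

# Arity of the fact = 2 main entities + one entity per qualifier value.
arity = 2 + len(hyper_relational_fact["qualifiers"])
print(arity)  # 4 -> a 4-ary relational fact
```

The other three schemas mentioned in the paper reorganize the same underlying entities and relation labels, so a fact like this one can be converted between representations without losing information.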
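Since the Software Dependencies row notes only a BERT-based encoder and the Adam optimizer without versions, the following minimal Python sketch shows the kind of stack that implies (Hugging Face Transformers plus PyTorch). The checkpoint name, the example sentence, and the specific learning-rate, weight-decay, and sequence-length values are assumptions picked from the reported search space, not settings confirmed by the paper.

```python
# Minimal sketch of the stack the paper implies (BERT-based encoder + Adam).
# The checkpoint and hyperparameter values below are assumptions, not pinned
# versions or settings from the Text2NKG paper.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")  # assumed checkpoint
encoder = AutoModel.from_pretrained("bert-base-cased")

optimizer = torch.optim.Adam(
    encoder.parameters(),
    lr=2e-5,           # one value from the reported {1e-5, 2e-5, 5e-5} grid
    weight_decay=0.1,  # one value from the reported {0.0, 0.1, 0.2, 0.3} grid
)

# Encode a sentence, capping the sequence length as in the reported grid.
inputs = tokenizer(
    "Barack Obama was educated at Harvard Law School.",
    truncation=True, max_length=256, return_tensors="pt",
)
hidden_states = encoder(**inputs).last_hidden_state  # [1, seq_len, hidden_dim]
print(hidden_states.shape)
```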
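Finally, as a rough reading of the Experiment Setup row, this sketch collects the fixed settings (10 epochs, Adam, eval batch size 1) and the Table 4 search space in one place and enumerates the grid. The class and field names are hypothetical; they are not taken from the Text2NKG repository.

```python
# Illustrative sketch only: the config class, field names, and grid enumerator
# are assumptions for explanation, not code from the Text2NKG repository.
from dataclasses import dataclass
from itertools import product

@dataclass
class Text2NKGSearchSpace:
    # Fixed settings reported in the Experiment Setup row
    epochs: int = 10           # "We train for 10 epochs on HyperRED"
    optimizer: str = "adam"    # Adam optimizer
    eval_batch_size: int = 1   # only value listed in Table 4
    # Search spaces listed in Table 4 (Hyperparameter Selection)
    alpha: tuple = (1.0, 0.1, 0.01, 0.001)
    train_batch_size: tuple = (2, 4, 8, 16)
    learning_rate: tuple = (1e-5, 2e-5, 5e-5)
    max_seq_length: tuple = (128, 256, 512, 1024)
    weight_decay: tuple = (0.0, 0.1, 0.2, 0.3)

def iter_grid(space: Text2NKGSearchSpace):
    """Enumerate every hyperparameter combination in the reported grid."""
    for alpha, bs, lr, msl, wd in product(
        space.alpha, space.train_batch_size, space.learning_rate,
        space.max_seq_length, space.weight_decay,
    ):
        yield {
            "alpha": alpha, "train_batch_size": bs, "learning_rate": lr,
            "max_seq_length": msl, "weight_decay": wd,
            "epochs": space.epochs, "eval_batch_size": space.eval_batch_size,
        }

if __name__ == "__main__":
    space = Text2NKGSearchSpace()
    print(sum(1 for _ in iter_grid(space)))  # 4 * 4 * 3 * 4 * 4 = 768 candidate configs
```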