Recall, Retrieve and Reason: Towards Better In-Context Relation Extraction

Authors: Guozheng Li, Peng Wang, Wenjun Ke, Yikai Guo, Ke Ji, Ziyu Shang, Jiajun Liu, Zijie Xu

IJCAI 2024

Reproducibility Variable | Result | LLM Response
--- | --- | ---
Research Type | Experimental | "Extensive experiments on different LLMs and RE datasets demonstrate that our method generates relevant and valid entity pairs and boosts ICL abilities of LLMs, achieving competitive or new state-of-the-art performance on sentence-level RE compared to previous supervised fine-tuning methods and ICL-based methods."
Researcher Affiliation | Academia | (1) School of Computer Science and Engineering, Southeast University; (2) Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications (Southeast University), Ministry of Education; (3) Beijing Institute of Computer Technology and Application
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide an explicit statement or link indicating that code for the described method is open-sourced.
Open Datasets | Yes | "We evaluate RE4 on SemEval 2010 [Hendrickx et al., 2019], TACRED [Zhang et al., 2017], Google RE (https://github.com/google-research-datasets/relation-extraction-corpus), SciERC [Luan et al., 2018], four commonly used RE datasets."
Dataset Splits | Yes | "The statistics of datasets are shown in Table 1." SemEval: 9 relations, 6,507 train / 1,493 dev / 2,717 test; TACRED: 41 relations, 68,124 / 22,631 / 15,509; Google RE: 5 relations, 38,112 / 9,648 / 9,616; SciERC: 7 relations, 3,219 / 455 / 974.
Hardware Specification | No | The paper does not provide any specific details about the hardware used for running experiments (e.g., GPU/CPU models, memory).
Software Dependencies | No | The paper mentions models like T5, BART, LLaMA, and LoRA, but does not specify version numbers for software dependencies or libraries (e.g., Python, PyTorch, TensorFlow, CUDA).
Experiment Setup | Yes | "We set the rank r of the LoRA parameters to 8 and the merging ratio α to 32. We train RE4 for 5 epochs with batch size 4 and learning rate 1e-4. For the number of generated entity pairs, we set k to 5."
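
For context, the reported hyperparameters map onto a standard parameter-efficient fine-tuning setup. Below is a minimal sketch assuming the HuggingFace peft and transformers libraries; the paper specifies only r = 8, α = 32, 5 epochs, batch size 4, learning rate 1e-4, and k = 5 generated entity pairs, so everything else here (library choice, backbone name, dropout, output path) is an assumption rather than the authors' released configuration.

```python
# Sketch of the reported RE4 fine-tuning setup (assumed HuggingFace stack;
# the paper does not name its framework). Only r, alpha, the epoch count,
# batch size, learning rate, and k come from the paper -- the rest are
# illustrative placeholders.
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSeq2SeqLM, TrainingArguments

# Backbone choice is hypothetical; the paper mentions T5, BART, and LLaMA.
model = AutoModelForSeq2SeqLM.from_pretrained("t5-large")

lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=8,               # LoRA rank r = 8 (from the paper)
    lora_alpha=32,     # merging ratio alpha = 32 (from the paper)
    lora_dropout=0.1,  # not reported in the paper; assumed
)
model = get_peft_model(model, lora_config)

training_args = TrainingArguments(
    output_dir="re4-lora",          # hypothetical output path
    num_train_epochs=5,             # 5 epochs (from the paper)
    per_device_train_batch_size=4,  # batch size 4 (from the paper)
    learning_rate=1e-4,             # learning rate 1e-4 (from the paper)
)

# k = 5 candidate entity pairs per sentence would be drawn at decoding time,
# e.g. via model.generate(..., num_beams=5, num_return_sequences=5).
K_ENTITY_PAIRS = 5
```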