Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Adaptive Prompting for Continual Relation Extraction: A Within-Task Variance Perspective

Authors: Minh Le, Tien Ngoc Luu, An Nguyen The, Thanh-Thien Le, Trang Nguyen, Tung Thanh Nguyen, Linh Ngo Van, Thien Huu Nguyen

AAAI 2025 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments validate the efficacy of our approach, demonstrating superior performance over state-of-the-art prompt-based and rehearsal-free methods in continual relation extraction. [...] Table 1 summarizes the performance of all methods on FewRel and TACRED datasets."
Researcher Affiliation | Collaboration | Minh Le (VinAI Research), Tien Ngoc Luu (Hanoi University of Science and Technology), An Nguyen The (FPT Software AI Center), Thanh-Thien Le (VinAI Research), Trang Nguyen (VinAI Research), Tung Thanh Nguyen (Moreh Inc.), Linh Ngo Van (Hanoi University of Science and Technology), Thien Huu Nguyen (University of Oregon, Eugene, Oregon, USA)
Pseudocode | Yes | "For a detailed overview of the training process, please refer to Algorithm 1."
Open Source Code | No | The paper does not contain an explicit statement about releasing the source code or a link to a repository for the described methodology.
Open Datasets | Yes | "To evaluate the effectiveness of WAVE-CRE and the baseline models, we utilize two popular datasets: FewRel (Han et al. 2018) [...] TACRED (Zhang et al. 2017)"
Dataset Splits | Yes | "FewRel (Han et al. 2018) contains 80 relation types with a total of 56,000 samples. Following the configurations outlined in Wang et al. (2019), we split it into 10 non-overlapping sub-datasets. TACRED (Zhang et al. 2017) consists of 42 relations and 106,264 samples. We adopt the experimental settings proposed by Cui et al. (2021) to partition the dataset into 10 distinct sub-datasets."
Hardware Specification | Yes | "In this work, we used a single NVIDIA A100 for all methods."
Software Dependencies | No | The paper mentions using BERT (Devlin et al. 2019) as the encoder but does not specify versions for the other software components or libraries required for reproduction.
Experiment Setup | No | The paper states: "We tune the hyperparameters for the proposed model using random search. We maintained a consistent size for the prompt pool M across all tasks. For baselines, we follow the identical experimental settings employed by Zhao et al. (2022) to ensure fair comparisons." However, it does not provide specific hyperparameter values (e.g., learning rate, batch size, number of epochs) for the proposed model, only general statements about tuning and references to other papers for the baseline settings.
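The dataset-split protocol reported above (FewRel's 80 relation types partitioned into 10 non-overlapping sub-datasets) can be sketched as follows. This is a minimal illustration of the continual-learning task construction, not the authors' code; the helper name `make_task_splits` and the fixed shuffling seed are assumptions.

```python
import random

def make_task_splits(relations, num_tasks=10, seed=0):
    """Partition relation labels into non-overlapping task subsets,
    mimicking the 10-task continual setup described in the paper."""
    rels = list(relations)
    random.Random(seed).shuffle(rels)          # deterministic task order
    size = len(rels) // num_tasks
    return [rels[i * size:(i + 1) * size] for i in range(num_tasks)]

# FewRel: 80 relation types -> 10 tasks of 8 relations each
splits = make_task_splits(range(80), num_tasks=10)
assert len(splits) == 10
assert all(len(s) == 8 for s in splits)
# Non-overlapping: no relation appears in more than one task
assert len({r for s in splits for r in s}) == 80
```

The same function covers the TACRED setting (42 relations over 10 tasks), although there the subsets cannot all be equal in size and the exact partition follows Cui et al. (2021) rather than a random shuffle.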
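The hyperparameter tuning quoted in the last row (random search) can be illustrated with a minimal sketch. The search space below is hypothetical, since the paper does not report the actual ranges, and `random_search` is an assumed helper, not the authors' implementation.

```python
import random

def random_search(train_eval, space, trials=20, seed=0):
    """Minimal random-search tuner: sample a configuration from `space`
    each trial and keep the one with the best validation score."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(trials):
        cfg = {name: rng.choice(choices) for name, choices in space.items()}
        score = train_eval(cfg)                # stand-in for train + validate
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Hypothetical space; the paper does not disclose the real values.
space = {"lr": [1e-5, 3e-5, 1e-4], "batch_size": [8, 16, 32]}
# Toy objective in place of an actual training-and-validation run.
best, score = random_search(lambda c: -abs(c["lr"] - 3e-5), space)
```

In the paper, `train_eval` would correspond to training WAVE-CRE with a candidate configuration and scoring it on held-out data, with the prompt pool size M held constant across tasks as stated.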