Continual Relation Extraction via Sequential Multi-Task Learning

Authors: Thanh-Thien Le, Manh Nguyen, Tung Thanh Nguyen, Linh Ngo Van, Thien Huu Nguyen

AAAI 2024

Reproducibility variables, with the assigned result and the supporting LLM response for each:
Research Type: Experimental
LLM Response: "Through extensive experiments on multiple datasets, CREST demonstrates significant improvements in CRE performance as well as superiority over other state-of-the-art Multi-task Learning frameworks..."
Researcher Affiliation: Collaboration
LLM Response: "(1) VinAI Research, Vietnam; (2) Hanoi University of Science and Technology, Vietnam; (3) University of Michigan, USA; (4) University of Oregon, USA"
Pseudocode: Yes
LLM Response: "Algorithm 1: Adaptive Unified Gradient Descent for CRE"
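The algorithm itself is not reproduced on this page. For orientation only, below is a minimal sketch of the generic idea behind unified gradient descent in multi-task learning (computing per-task gradients and merging them into a single parameter update). The uniform averaging and all names here are illustrative assumptions, not the adaptive weighting scheme of the paper's Algorithm 1.

```python
import torch

def unified_gradient_step(model, task_losses, optimizer):
    """Merge per-task gradients into one parameter update.

    Illustrative sketch only: uniform averaging stands in for the
    adaptive weighting of the paper's Algorithm 1, which is not shown.
    """
    params = [p for p in model.parameters() if p.requires_grad]
    # One gradient tuple per task, each aligned with `params`.
    per_task_grads = [
        torch.autograd.grad(loss, params, retain_graph=True)
        for loss in task_losses
    ]
    optimizer.zero_grad()
    for param, *grads in zip(params, *per_task_grads):
        param.grad = torch.stack(grads, dim=0).mean(dim=0)
    optimizer.step()
```
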
Open Source Code: No
LLM Response: The paper does not provide an explicit statement about the release of source code or a link to a code repository for the described methodology.
Open Datasets: Yes
LLM Response: "We evaluate our proposed method and all baselines on two English datasets: FewRel (Han et al. 2018) dataset comprises 80 relation types and contains a total of 56,000 samples. ... TACRED (Zhang et al. 2017) dataset presents an imbalanced scenario for relation extraction (RE) with 42 relations, including the no relation class, and a total of 106,264 samples."
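As a small illustration of working with these datasets, here is a hedged loader sketch assuming the JSON layout of the original FewRel release (a mapping from relation IDs to instance lists with "tokens", "h", and "t" fields); field names may differ in other distributions, and TACRED uses its own JSON schema not covered here.

```python
import json

def load_fewrel(path):
    """Flatten FewRel-style JSON into (tokens, head, tail, relation) tuples.

    Assumes the layout of the original FewRel release (e.g. train_wiki.json):
    {"P931": [{"tokens": [...], "h": [name, id, spans], "t": [...]}, ...]}.
    """
    with open(path, encoding="utf-8") as f:
        data = json.load(f)
    samples = []
    for relation, instances in data.items():
        for inst in instances:
            head_name = inst["h"][0]
            tail_name = inst["t"][0]
            samples.append((inst["tokens"], head_name, tail_name, relation))
    return samples
```
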
Dataset Splits: Yes
LLM Response: "In line with Wang et al.'s paper (2019), this paper adopts the same configurations and utilizes the original training set and validation set as the foundation for conducting experiments."
Hardware Specification: Yes
LLM Response: "Computing infrastructure: Single NVIDIA A100 40GB."
Software Dependencies: Yes
LLM Response: "PyTorch 2.0.0+cu117 and Hugging Face Transformers 4.33.0 are used to implement the models."
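One way to mirror these pins when attempting a rerun is to assert them at startup; a minimal check, assuming the reported versions are the ones installed:

```python
import torch
import transformers

# Reported pins: PyTorch 2.0.0+cu117, Transformers 4.33.0.
assert torch.__version__.startswith("2.0.0"), torch.__version__
assert transformers.__version__ == "4.33.0", transformers.__version__
print("CUDA build:", torch.version.cuda)  # expected "11.7" for the +cu117 wheel
```
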
Experiment Setup: Yes
LLM Response:
- Batch size: 16, similar to CRL (Zhao et al. 2022)
- Learning rate: {1e-5, 2e-5, 1e-4}
- Number of embedding training epochs: {10, 20, 50}
- Number of classifier training epochs: {100, 300, 500}
- Number of GMM components: {1, 3, 5}
- Number of GMM samples: {64, 128, 256, 512}
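Since only value ranges are reported, a full grid search is one plausible reading of the setup. A minimal sketch, assuming a Cartesian grid over the reported ranges (key names are illustrative, not the authors' configuration schema):

```python
from itertools import product

# Search space taken from the reported ranges; batch size is fixed at 16.
GRID = {
    "learning_rate": [1e-5, 2e-5, 1e-4],
    "embedding_epochs": [10, 20, 50],
    "classifier_epochs": [100, 300, 500],
    "gmm_components": [1, 3, 5],
    "gmm_samples": [64, 128, 256, 512],
}

def iter_configs(grid=GRID, batch_size=16):
    """Yield one config dict per point in the Cartesian product of the grid."""
    keys = list(grid)
    for values in product(*(grid[k] for k in keys)):
        yield {"batch_size": batch_size, **dict(zip(keys, values))}
```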