Curriculum-Meta Learning for Order-Robust Continual Relation Extraction

Authors: Tongtong Wu, Xuekai Li, Yuan-Fang Li, Gholamreza Haffari, Guilin Qi, Yujin Zhu, Guoqiang Xu

AAAI 2021, pp. 10363-10369 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our comprehensive experiments on three benchmark datasets show that our proposed method outperforms the state-of-the-art techniques.
Researcher Affiliation | Collaboration | (1) School of Computer Science and Engineering, Southeast University, Nanjing, China; (2) Faculty of Information Technology, Monash University, Melbourne, Australia; (3) Gamma Lab, Ping An OneConnect, Shanghai, China
Pseudocode | Yes | Algorithm 1: Curriculum-Meta Learning (see the illustrative sketch after this table)
Open Source Code | Yes | The code is available at https://github.com/wutong8023/AAAI-CML.
Open Datasets | Yes | We conduct our experiments on three datasets, including Continual-FewRel, Continual-SimpleQuestions, and Continual-TACRED, which were introduced in (?).
Dataset Splits | No | The paper mentions forming training and testing sets but does not specify a separate validation split or its details.
Hardware Specification | No | No hardware details (e.g., GPU/CPU models, memory, or cloud instance types) are reported for the experiments.
Software Dependencies | No | The paper mentions the Adam optimizer but gives no version numbers for any software dependencies or libraries.
Experiment Setup | No | The paper states "see Appendix B for hyperparameters"; specific experimental setup details are not provided in the main text.
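
The paper's method is given as Algorithm 1 (Curriculum-Meta Learning), with the authoritative implementation in the repository linked above. For orientation only, below is a minimal PyTorch sketch of one curriculum-meta update, assuming a Reptile-style inner/outer loop and a replay memory of earlier tasks. The names `sample_curriculum` and `meta_train_task`, the random sampler standing in for the paper's difficulty-ordered curriculum, and all hyperparameter values are illustrative assumptions, not taken from the paper or its code.

```python
import copy
import random

import torch
import torch.nn.functional as F


def sample_curriculum(memory, k):
    # Placeholder for the paper's difficulty-ordered curriculum sampling:
    # here we simply draw up to k stored (x, y) pairs from earlier tasks
    # uniformly at random.
    return random.sample(memory, min(k, len(memory)))


def meta_train_task(model, task_data, memory, inner_steps=5,
                    inner_lr=1e-3, meta_lr=0.5):
    """One Reptile-style meta update on the current task plus replayed memory."""
    fast = copy.deepcopy(model)  # fast weights for inner-loop adaptation
    opt = torch.optim.Adam(fast.parameters(), lr=inner_lr)
    batches = task_data + sample_curriculum(memory, k=len(task_data))
    for _ in range(inner_steps):
        for x, y in batches:
            opt.zero_grad()
            loss = F.cross_entropy(fast(x), y)  # relation classification loss
            loss.backward()
            opt.step()
    # Outer (meta) update: interpolate the slow weights toward the adapted
    # fast weights instead of copying them, which is the Reptile rule.
    with torch.no_grad():
        for slow, adapted in zip(model.parameters(), fast.parameters()):
            slow.add_(meta_lr * (adapted - slow))
```

Anyone attempting a reproduction should follow Algorithm 1 and the released repository rather than this sketch, which only conveys the general inner/outer structure of a curriculum-meta loop.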