Prototypical Representation Learning for Relation Extraction

Authors: Ning Ding, Xiaobin Wang, Yao Fu, Guangwei Xu, Rui Wang, Pengjun Xie, Ying Shen, Fei Huang, Hai-Tao Zheng, Rui Zhang

ICLR 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In order to evaluate the performance of prototypical metric learning, we conduct extensive experiments on three tasks: supervised relation learning, few-shot learning, and our proposed fuzzy relation learning evaluation. We make a comprehensive evaluation and analysis of our method, as well as full-scale comparisons between our work and previous state-of-the-art methods.
Researcher Affiliation | Collaboration | Tsinghua University, Alibaba Group, The University of Edinburgh, Sun Yat-sen University
Pseudocode | Yes | Algorithm 1: Training for Supervised Relation Extraction; Algorithm 2: Training for Few-shot Relation Extraction; Algorithm 3: Training for Fuzzy Relation Evaluation. (A sketch of the prototype step these algorithms share appears after the table.)
Open Source Code | Yes | The source code of the paper will be released at https://github.com/Alibaba-NLP/ProtoRE.
Open Datasets | Yes | We use the large few-shot relation learning dataset FewRel (Han et al., 2018b) in this task. We use the classic benchmark dataset SemEval-2010 Task 8 (Hendrickx et al., 2010) as the dataset for supervised relation learning. (An episode-sampling sketch for FewRel appears after the table.)
Dataset Splits | Yes | We utilize the official evaluation setting, which splits the 100 relations into 64, 16, and 20 relations for training, validation, and testing, respectively. For model selection, we randomly sample 1500 instances from the official training data as the validation set.
Hardware Specification | Yes | All the experiments run on NVIDIA Tesla V100 GPUs.
Software Dependencies | No | We use the PyTorch (Paszke et al., 2019) framework to implement our model. (PyTorch is named, but no version number is given. BERT is also mentioned without a version.)
Experiment Setup | Yes | The hidden size is set to 768 and the batch size to 60; the model is trained for 5 epochs in each experiment, using AdamW (Loshchilov & Hutter, 2018) with a learning rate of 1e-5. The batch size for few-shot training is 4, the number of training steps is 20000, and the learning rate is 2e-5 with AdamW. For the third setting, the batch size is 20, the number of training epochs is 30, and the learning rate is 1e-5 with Adam (Kingma & Ba, 2014). (A toy training-loop sketch using the first set of hyperparameters appears after the table.)
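
The three training algorithms themselves are not reproduced on this page. For orientation, here is a minimal sketch of the nearest-prototype classification step that prototypical approaches of this kind share: class prototypes are mean embeddings of support instances, and queries are scored by distance to each prototype. The random tensors stand in for encoder output, the function name is ours, and this is not the authors' exact objective, which additionally constrains how prototypes are learned.

```python
import torch
import torch.nn.functional as F

def prototype_logits(support_emb, support_labels, query_emb, n_way):
    # Class prototype = mean embedding of that class's support instances.
    prototypes = torch.stack(
        [support_emb[support_labels == c].mean(dim=0) for c in range(n_way)]
    )  # [n_way, dim]
    # Score each query by negative squared Euclidean distance to prototypes;
    # a higher logit means the query sits closer to that class prototype.
    return -torch.cdist(query_emb, prototypes) ** 2  # [n_query, n_way]

# Example: a 5-way 1-shot episode with 768-dim embeddings (BERT hidden size).
support = torch.randn(5, 768)   # stand-in for encoded support instances
labels = torch.arange(5)        # one support instance per class
query = torch.randn(10, 768)    # stand-in for encoded query instances
logits = prototype_logits(support, labels, query, n_way=5)
loss = F.cross_entropy(logits, torch.randint(0, 5, (10,)))
```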
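The Open Datasets and Dataset Splits rows together describe FewRel's official 64/16/20 relation split. As a hedged illustration of how N-way K-shot episodes are typically drawn from such a split, the sketch below assumes FewRel's distributed JSON format (a mapping from relation IDs to instance lists); the file name `fewrel_train.json` and the sampling routine are our own stand-ins, not the authors' data pipeline.

```python
import json
import random

def sample_episode(data, n_way=5, k_shot=1, n_query=5):
    # Draw N relations, then K support + Q query instances from each.
    relations = random.sample(sorted(data), n_way)
    support, query, query_labels = [], [], []
    for label, rel in enumerate(relations):
        instances = random.sample(data[rel], k_shot + n_query)
        support.extend(instances[:k_shot])
        query.extend(instances[k_shot:])
        query_labels.extend([label] * n_query)
    return support, query, query_labels

# Assumed file name; with the official split, the training file covers
# 64 relations (validation has 16, test has 20). Each instance holds
# "tokens" plus head/tail entity annotations under "h" and "t".
with open("fewrel_train.json") as f:
    train_data = json.load(f)
support, query, labels = sample_episode(train_data, n_way=5, k_shot=1)
```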
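The first set of quoted hyperparameters (hidden size 768, batch size 60, 5 epochs, AdamW at 1e-5) can be wired into a standard PyTorch loop as below. The linear classifier and random tensors are toy stand-ins for the paper's BERT-based encoder and the SemEval-2010 Task 8 data; only the hyperparameters come from the quote.

```python
import torch
import torch.nn as nn
from torch.optim import AdamW
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-ins: a linear classifier over 768-dim features replaces the
# paper's BERT-based encoder, and random tensors replace the SemEval-2010
# Task 8 training data (which has 19 relation labels).
model = nn.Linear(768, 19)
dataset = TensorDataset(torch.randn(600, 768), torch.randint(0, 19, (600,)))
loader = DataLoader(dataset, batch_size=60, shuffle=True)  # batch size 60
optimizer = AdamW(model.parameters(), lr=1e-5)  # supervised setting
# Per the quote, few-shot training instead uses batch size 4, 20000 steps,
# and lr 2e-5 (AdamW); the third setting uses batch size 20, 30 epochs,
# and lr 1e-5 (Adam).
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):  # "we train the model for 5 epochs"
    for features, targets in loader:
        optimizer.zero_grad()
        loss = criterion(model(features), targets)
        loss.backward()
        optimizer.step()
```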