Relational Triple Extraction: One Step is Enough

Authors: Yu-Ming Shang, Heyan Huang, Xin Sun, Wei Wei, Xian-Ling Mao

IJCAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experimental results on two widely used datasets demonstrate that the proposed model performs better than the state-of-the-art baselines."
Researcher Affiliation | Academia | Yu-Ming Shang (1), Heyan Huang (1), Xin Sun (1), Wei Wei (2), Xian-Ling Mao (1). (1) School of Computer Science & Technology, Beijing Institute of Technology, Beijing, China; (2) Huazhong University of Science and Technology, Hubei, China.
Pseudocode | No | The paper describes the proposed method in text and with a system architecture diagram (Figure 2), but does not contain structured pseudocode or an algorithm block.
Open Source Code | No | The paper does not include an explicit statement about releasing its own source code or a direct link to a code repository for the described methodology.
Open Datasets | Yes | "We conduct experiments on two widely used relational triple extraction benchmarks: NYT [Riedel et al., 2010] and WebNLG [Gardent et al., 2017]."
Dataset Splits | Yes | Table 1 reports train/validation/test statistics: NYT 56,195 / 4,999 / 5,000; WebNLG 5,019 / 500 / 703.
Hardware Specification | Yes | "All experiments are conducted with a RTX 3090 GPU."
Software Dependencies | No | The paper mentions using a specific BERT model (the cased base version of BERT) but does not give version numbers for general software dependencies such as the programming language or deep learning framework.
Experiment Setup | Yes | The dimension of the token representation hi is d = 768. The dimension of the projected entity representations de is set to 900. During training, the learning rate is 1e-5, and the batch size is set to 8 on NYT* and NYT, 6 on WebNLG* and WebNLG. The max length of candidate entities C is 9/6/12/21 on NYT*/WebNLG*/NYT/WebNLG respectively. For each sentence, nneg = 100 negative entities are randomly selected from E to optimize the objective function of a minibatch. During inference, links are predicted for all candidate entities and the max length C is 7/6/11/20 on NYT*/WebNLG*/NYT/WebNLG respectively.
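The reported hyperparameters can be collected into a small configuration sketch. This is a reading aid only, not the authors' code: the key names (d_token, d_entity, etc.) are hypothetical, and the starred/unstarred dataset keys follow the four benchmark variants commonly used in this literature.

```python
# Hypothetical hyperparameter summary for the paper's experiment setup.
# Key names are our own; values are taken from the setup description above.
CONFIG = {
    "d_token": 768,          # token representation dimension d (BERT base)
    "d_entity": 900,         # projected entity representation dimension de
    "learning_rate": 1e-5,
    "batch_size": {"NYT*": 8, "NYT": 8, "WebNLG*": 6, "WebNLG": 6},
    # max candidate-entity length C, per dataset
    "max_entity_len_train": {"NYT*": 9, "WebNLG*": 6, "NYT": 12, "WebNLG": 21},
    "max_entity_len_infer": {"NYT*": 7, "WebNLG*": 6, "NYT": 11, "WebNLG": 20},
    "n_negative_entities": 100,  # negatives sampled per sentence
}

# The training-time C is larger than the inference-time C on NYT*/NYT,
# which the config makes easy to check at a glance:
for name in ("NYT*", "NYT"):
    assert CONFIG["max_entity_len_train"][name] > CONFIG["max_entity_len_infer"][name]
```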