Joint Extraction of Entities and Relations Based on a Novel Graph Scheme

Authors: Shaolei Wang, Yue Zhang, Wanxiang Che, Ting Liu

IJCAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments on New York Times (NYT) corpora show that our approach outperforms the state-of-the-art methods." "We conduct our experiments on New York Times (NYT) corpora and the results show our method outperforms the state-of-the-art end-to-end methods." "Table 4: Comparison with previous state-of-the-art methods on NYT."
Researcher Affiliation | Academia | Shaolei Wang [1], Yue Zhang [2], Wanxiang Che [1], Ting Liu [1]; [1] Center for Social Computing and Information Retrieval, Harbin Institute of Technology, China; [2] Singapore University of Technology and Design. {slwang, car, tliu}@ir.hit.edu.cn, yue_zhang@sutd.edu.sg
Pseudocode | No | The paper describes the transition actions and example action sequences in tables (Table 1, Table 2, Table 3), which outline the algorithm's steps, but there is no formally labeled 'Pseudocode' or 'Algorithm' block or figure.
Open Source Code | Yes | "The code is released" (footnote 1 in the paper: https://github.com/hitwsl/joint-entity-relation).
Open Datasets | Yes | "To directly compare with Zheng et al. [2017], we use the public dataset NYT as our main data set, which is produced by distant supervision [Ren et al., 2017]..." (footnote 2 in the paper: "The dataset can be downloaded at: https://github.com/shanzhenren/CoType").
Dataset Splits | Yes | "We follow Zheng et al. [2017], creating a validation set by randomly sampling 10% of the data from the test set and using the remaining data for evaluation." (A minimal split sketch appears after this table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory, or cluster specifications) used for running the experiments.
Software Dependencies | No | The paper mentions using stochastic gradient descent (SGD) and a variant of the skip n-gram model, but it does not give version numbers for any software dependencies (e.g., the deep learning framework, the programming language, or other packages).
Experiment Setup | Yes | "We update all model parameters by backpropagation using stochastic gradient descent (SGD) with a learning rate of 0.01 and gradient clipping at 5.0. Following Dyer et al. [2015], we use a variant of the skip n-gram model, namely structured skip n-gram [Ling et al., 2015], to create word embeddings. We have 2 hidden layers in our network and the dimensionality of the hidden units is 100." (A hedged training-step sketch using these hyperparameters appears after this table.)
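
The Dataset Splits row describes carving a validation set out of the test data and evaluating on the remainder. A minimal sketch of that procedure, assuming a list-like dataset; the function name, seed, and shuffling strategy are illustrative and not taken from the paper or its released code:

```python
import random

def split_test_set(test_examples, val_ratio=0.1, seed=0):
    """Randomly sample val_ratio of the test data as a validation set
    and keep the remainder for final evaluation (assumed procedure;
    the paper does not publish its sampling code in this quote)."""
    examples = list(test_examples)
    random.Random(seed).shuffle(examples)      # deterministic shuffle
    n_val = int(len(examples) * val_ratio)     # 10% for validation
    return examples[:n_val], examples[n_val:]  # (validation, evaluation)
```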
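The Experiment Setup row fixes the optimization hyperparameters but, as the Software Dependencies row notes, not the framework. The sketch below mirrors the quoted numbers (SGD, learning rate 0.01, gradient clipping at 5.0, two hidden layers of 100 units) in PyTorch purely for illustration; the framework choice, layer types, and toy objective are assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn

# Numbers below come from the Experiment Setup quote; everything else
# (framework, layer types, placeholder loss) is assumed for illustration.
HIDDEN_DIM = 100      # "dimensionality of the hidden units is 100"
LEARNING_RATE = 0.01  # SGD learning rate
CLIP_NORM = 5.0       # gradient clipping threshold

model = nn.Sequential(                  # stand-in for the paper's network
    nn.Linear(HIDDEN_DIM, HIDDEN_DIM),  # hidden layer 1
    nn.ReLU(),
    nn.Linear(HIDDEN_DIM, HIDDEN_DIM),  # hidden layer 2
    nn.ReLU(),
)
optimizer = torch.optim.SGD(model.parameters(), lr=LEARNING_RATE)

def training_step(batch):
    """One SGD update with backpropagation and gradient clipping at 5.0."""
    optimizer.zero_grad()
    loss = model(batch).pow(2).mean()   # placeholder loss, not the paper's objective
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), CLIP_NORM)
    optimizer.step()
    return loss.item()

# Example: one update on random input (shapes are illustrative).
print(training_step(torch.randn(32, HIDDEN_DIM)))
```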