Relation Extraction with Convolutional Network over Learnable Syntax-Transport Graph

Authors: Kai Sun, Richong Zhang, Yongyi Mao, Samuel Mensah, Xudong Liu

AAAI 2020, pp. 8928-8935

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments on Semeval-2010 Task 8 and Tacred show our approach outperforms previous methods." "In this section, we conduct experiments to validate our model on benchmark datasets."
Researcher Affiliation | Academia | (1) SKLSDE, School of Computer Science and Engineering, Beihang University, Beijing, China; (2) Beijing Advanced Institution on Big Data and Brain Computing, Beihang University, Beijing, China; (3) School of Electrical Engineering and Computer Science, University of Ottawa, Ottawa, Canada
Pseudocode | No | No structured pseudocode or algorithm blocks were found.
Open Source Code | No | The paper does not provide concrete access to source code for the method it describes; it only mentions using third-party code to reproduce a baseline result: "However, we returned to the source code provided by (Guo, Zhang, and Lu 2019) to reproduce the result reported for C-AGGCN."
Open Datasets | Yes | "Specifically, we perform experiments on the Semeval-2010 Task 8 (Hendrickx et al. 2010) (Semeval) and the Tacred (Zhang et al. 2017) datasets."
Dataset Splits | Yes | Table 2 reports the split sizes: Semeval: 7,200 train / 800 dev / 2,717 test; Tacred: 68,124 train / 22,631 dev / 15,509 test.
Hardware Specification | No | No specific hardware details (like GPU/CPU models, memory, or processor types) used for running the experiments were provided.
Software Dependencies | No | The paper mentions 'Glove embeddings (Pennington, Socher, and Manning 2014)', the 'Stanford parser', the 'SGD optimizer', and a 'Bi-LSTM', but does not provide version numbers for any key software components or libraries.
Experiment Setup | Yes | We exploit 300-dimensional Glove vectors (Pennington, Socher, and Manning 2014) for the word embeddings, as well as 30-dimensional part-of-speech (POS), 30-dimensional named entity recognition (NER), and 30-dimensional dependency relation (DEP) embeddings. We concatenate the word, POS, and NER embeddings and learn a 300-dimensional Bi-LSTM embedding for each word. We randomly drop out 10% of neurons in the first GCN layer and 10% of neurons in the input layer. The model is trained for 100 epochs with batch size 50, using the SGD optimizer with an initial learning rate of 0.7 for all datasets.
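
The reported input-embedding and optimizer settings translate naturally into a short configuration sketch. Below is a minimal, hypothetical PyTorch illustration of that setup, not the authors' released code: vocabulary sizes, module names, and the omitted GCN and classifier layers are placeholders, and the 150-units-per-direction BiLSTM is an assumption chosen so that the concatenated output is 300-dimensional.

```python
import torch
import torch.nn as nn

class InputEncoder(nn.Module):
    """Hypothetical sketch of the input layer described in the experiment setup."""
    def __init__(self, vocab_size, pos_size, ner_size, dep_size):
        super().__init__()
        # 300-d word vectors (initialized from GloVe in the paper),
        # plus 30-d POS, NER, and DEP embeddings.
        self.word_emb = nn.Embedding(vocab_size, 300)
        self.pos_emb = nn.Embedding(pos_size, 30)
        self.ner_emb = nn.Embedding(ner_size, 30)
        self.dep_emb = nn.Embedding(dep_size, 30)  # used by the graph component, not shown here
        self.input_dropout = nn.Dropout(0.1)       # 10% dropout on the input layer
        # BiLSTM yielding a 300-d contextual embedding per word
        # (assumed 150 hidden units per direction, concatenated).
        self.bilstm = nn.LSTM(300 + 30 + 30, 150,
                              batch_first=True, bidirectional=True)

    def forward(self, words, pos, ner):
        # Concatenate word, POS, and NER embeddings, as stated in the setup.
        x = torch.cat([self.word_emb(words),
                       self.pos_emb(pos),
                       self.ner_emb(ner)], dim=-1)
        x = self.input_dropout(x)
        h, _ = self.bilstm(x)                      # shape: (batch, seq_len, 300)
        return h

# Training settings reported above: SGD with initial learning rate 0.7,
# 100 epochs, batch size 50 (full model and data loading are placeholders).
# model = ...  # InputEncoder + GCN layers (10% dropout on the first) + classifier
# optimizer = torch.optim.SGD(model.parameters(), lr=0.7)
```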