A Neural Transition-Based Approach for Semantic Dependency Graph Parsing

Authors: Yuxuan Wang, Wanxiang Che, Jiang Guo, Ting Liu

AAAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We test our parser on the SemEval-2016 Task 9 dataset (Chinese) and the SemEval-2015 Task 18 dataset (English). On both benchmark datasets, we obtain superior or comparable results to the best performing systems. Our parser can be further improved with a simple ensemble mechanism, resulting in the state-of-the-art performance.
Researcher Affiliation | Academia | Yuxuan Wang, Wanxiang Che, Jiang Guo, Ting Liu, Research Center for Social Computing and Information Retrieval, Harbin Institute of Technology, Harbin, China, {yxwang, car, jguo, tliu}@ir.hit.edu.cn
Pseudocode | Yes | Table 1: Transitions defined in the list-based arc-eager algorithm (Choi and McCallum 2013). This table presents structured steps for the transition system; a hedged code sketch of these transitions is given after this table.
Open Source Code | Yes | Our system will be publicly available at https://github.com/HITalexwang/lstm-sdparser.
Open Datasets | Yes | For Chinese, we use the SemEval-2016 Task 9 as our testbed. ... For English, we conduct experiments on the English part of SemEval-2015 Task 18 closed track (Oepen et al. 2015).
Dataset Splits | Yes | For English, we conduct experiments on the English part of SemEval-2015 Task 18 closed track (Oepen et al. 2015). We use the same data split as previous work (Almeida and Martins 2015; Du et al. 2015), with 33,964 training sentences (WSJ 00-19), 1,692 development sentences (WSJ 20), 1,410 in-domain testing sentences (WSJ 21) and 1,849 out-of-domain testing sentences from the Brown Corpus.
Hardware Specification | No | The paper mentions using DyNet for implementation but does not specify any hardware details such as the GPU or CPU models used for the experiments.
Software Dependencies | No | The paper mentions 'DyNet (Neubig et al. 2017)' as the neural model implementation library but does not provide a specific version number.
Experiment Setup | Yes | The Stack-LSTMs and Bi-LSTM have two layers while the Tree-LSTM has one. The input and hidden dimensions of the Stack-LSTM, Bi-LSTM and Tree-LSTM are 200. The learned word embedding size dwt = 100; the POS tag, relation and transition embedding sizes are all 50. (These values are collected in the configuration sketch below.)
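
The list-based arc-eager transitions referenced in the Pseudocode row (Table 1 of the paper, after Choi and McCallum 2013) can be illustrated with a minimal sketch. The code below is an assumed simplification, not the authors' released implementation: it models the parser state (lambda_1, lambda_2, beta, A) and the seven composite transitions, and omits the transition preconditions and the neural scoring that selects the next transition.

```python
# Minimal sketch (assumed, not the authors' code) of the list-based arc-eager
# transition system (Choi and McCallum 2013) as used for semantic dependency
# graphs, where a token may receive more than one head.

class ParserState:
    """State (lambda_1, lambda_2, beta, A); token 0 is the pseudo-root."""

    def __init__(self, n_tokens):
        self.lam1 = [0]                      # lambda_1: tokens that may still take arcs
        self.lam2 = []                       # lambda_2: tokens temporarily passed over
        self.beta = list(range(1, n_tokens)) # beta: unprocessed input tokens
        self.arcs = []                       # A: labeled arcs as (head, dependent, label)

    # Elementary operations combined by the composite transitions below.
    def _left_arc(self, label):   # arc from buffer front j to lambda_1 top i
        self.arcs.append((self.beta[0], self.lam1[-1], label))

    def _right_arc(self, label):  # arc from lambda_1 top i to buffer front j
        self.arcs.append((self.lam1[-1], self.beta[0], label))

    def _reduce(self):            # i needs no further arcs: pop it from lambda_1
        self.lam1.pop()

    def _pass(self):              # i may take later arcs: move it onto lambda_2
        self.lam2.insert(0, self.lam1.pop())

    def _shift(self):             # done comparing j: restore lambda_2, shift j onto lambda_1
        self.lam1.extend(self.lam2)
        self.lam2 = []
        self.lam1.append(self.beta.pop(0))

    def apply(self, transition, label=None):
        """Composite transitions as listed in Table 1 (preconditions omitted)."""
        if transition == "LEFT-REDUCE":
            self._left_arc(label); self._reduce()
        elif transition == "RIGHT-SHIFT":
            self._right_arc(label); self._shift()
        elif transition == "NO-SHIFT":
            self._shift()
        elif transition == "NO-REDUCE":
            self._reduce()
        elif transition == "LEFT-PASS":
            self._left_arc(label); self._pass()
        elif transition == "RIGHT-PASS":
            self._right_arc(label); self._pass()
        elif transition == "NO-PASS":
            self._pass()
        else:
            raise ValueError(f"unknown transition: {transition}")
```

In the full parser, a classifier over the Stack-LSTM representations of the parser state chooses the next transition at each step; the sketch above only fixes the state bookkeeping.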
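
The hyperparameters quoted in the Experiment Setup row can be gathered into a single reference configuration. The dictionary below is only an illustrative summary of the reported values; the key names are assumptions, not the configuration keys used by the released lstm-sdparser code.

```python
# Hyperparameters reported in the paper, collected for reference.
# Key names are illustrative, not the authors' actual configuration keys.
REPORTED_HPARAMS = {
    "stack_lstm_layers": 2,     # Stack-LSTMs have two layers
    "bi_lstm_layers": 2,        # Bi-LSTM has two layers
    "tree_lstm_layers": 1,      # Tree-LSTM has one layer
    "lstm_input_dim": 200,      # input dimension of Stack-, Bi- and Tree-LSTM
    "lstm_hidden_dim": 200,     # hidden dimension of Stack-, Bi- and Tree-LSTM
    "word_emb_dim": 100,        # learned word embedding size
    "pos_emb_dim": 50,          # POS tag embedding size
    "rel_emb_dim": 50,          # relation embedding size
    "transition_emb_dim": 50,   # transition embedding size
}
```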