Dependency Tree Representations of Predicate-Argument Structures

Authors: Likun Qiu, Yue Zhang, Meishan Zhang

AAAI 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We compare the performances of two simple sequence labeling models trained on our treebank with those of a state-of-the-art SRL system on its equivalent PB-style conversion, demonstrating the effectiveness of the novel semantic representation. Table 3 shows the main results of PB-style evaluation. Table 4 shows the results of the CST-style evaluation for the MLN system.
Researcher Affiliation | Academia | (1) School of Chinese Language and Literature, Ludong University, China; (2) Singapore University of Technology and Design, Singapore. qiulikun@pku.edu.cn, yue_zhang@sutd.edu.sg
Pseudocode | No | The paper describes a "transfer algorithm" but does not provide it in the form of pseudocode or a clearly labeled algorithm block.
Open Source Code | Yes | We make our treebank and the proposition generation script freely available at klcl.pku.edu.cn or www.shandongnlp.com.
Open Datasets | Yes | Given our framework, a semantic treebank, the Chinese Semantic Treebank (CST), containing 14,463 sentences, is constructed. This corpus is based on the Peking University Multi-view Chinese Treebank (PMT) release 1.0 (Qiu et al. 2014), which is a dependency treebank. We make our treebank and the proposition generation script freely available at klcl.pku.edu.cn or www.shandongnlp.com.
Dataset Splits | Yes | Sentences 12001-13000 and 13001-14463 are used as the development and test sets, respectively. The remaining sentences are used as the training data.
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models or memory amounts) are provided for running the experiments.
Software Dependencies | No | The paper mentions using Conditional Random Fields, a Markov Logic Network, and MATE-tools, with associated references and URLs, but does not give exact version numbers for these packages or for any other programming-language or library dependencies used in the experiments.
Experiment Setup | No | The paper describes the features used for sequence labeling in Table 2 but does not provide specific hyperparameters (e.g., learning rate, batch size, number of epochs, optimizer settings) or detailed training configurations.
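The dataset split quoted in the table above (sentences 12001-13000 for development, 13001-14463 for test, the rest for training) is a simple positional partition, which can be sketched as follows. This is an illustrative sketch, not code from the paper; `split_cst` and the placeholder sentence list are hypothetical names, and 1-indexed sentence IDs are assumed.

```python
def split_cst(sentences):
    """Partition the 14,463 CST sentences into train/dev/test by position,
    following the split quoted in the reproducibility table (assumed 1-indexed)."""
    assert len(sentences) == 14463, "CST is stated to contain 14,463 sentences"
    train = sentences[:12000]       # sentences 1-12000 (training data)
    dev = sentences[12000:13000]    # sentences 12001-13000 (development set)
    test = sentences[13000:]        # sentences 13001-14463 (test set)
    return train, dev, test

# Usage with placeholder sentence IDs standing in for parsed trees:
train, dev, test = split_cst([f"sent_{i}" for i in range(1, 14464)])
print(len(train), len(dev), len(test))  # 12000 1000 1463
```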