AMR Parsing With Cache Transition Systems

Authors: Xiaochang Peng, Daniel Gildea, Giorgio Satta

AAAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our results show that a cache transition system can cover almost all AMR graphs with a small cache size, and our end-to-end system achieves competitive results in comparison with other transition-based approaches for AMR parsing.
Researcher Affiliation | Academia | Xiaochang Peng, Daniel Gildea, Department of Computer Science, University of Rochester, Rochester, NY 14627, {xpeng, gildea}@cs.rochester.edu; Giorgio Satta, Department of Information Engineering, University of Padua, Via Gradenigo 6/A, 35131 Padova, Italy, satta@dei.unipd.it
Pseudocode | No | The paper describes the computational model and oracle algorithm in detail through text and an example run (Figure 2), but it does not contain structured pseudocode or algorithm blocks labeled as such.
Open Source Code | No | The paper does not provide any concrete access to source code for the methodology described, such as a specific repository link or an explicit code release statement.
Open Datasets | Yes | We evaluate our system on the released dataset (LDC2015E86) for SemEval-2016 Task 8 on meaning representation parsing (May 2016).
Dataset Splits | Yes | The dataset contains 16,833 training, 1,368 development and 1,371 test sentences which mainly cover domains like newswire, discussion forum, etc.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, processor types, or memory amounts used for running its experiments.
Software Dependencies | No | The paper mentions 'Stanford CoreNLP', the 'Illinois Named Entity Tagger', and 'word2vec', but provides a version number only for Smatch (version 2.0.2). It does not give specific version numbers for all key ancillary software components used.
Experiment Setup | Yes | As for the feedforward classifiers, we use one hidden layer with 200 tanh hidden units and a learning rate of 0.005.
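
The Experiment Setup row above quotes the paper's only concrete classifier hyperparameters. The sketch below is a minimal, hypothetical rendering of such a feedforward classifier; the paper does not name its framework, optimizer, or feature/action dimensions, so PyTorch, plain SGD, and the `feat_dim`/`num_actions` placeholders are assumptions for illustration only.

```python
# Hypothetical sketch of the feedforward transition classifier described in the
# "Experiment Setup" row: one hidden layer, 200 tanh units, learning rate 0.005.
# The paper does not specify the framework, optimizer, or input/output sizes;
# PyTorch, plain SGD, and the placeholder dimensions below are assumptions.
import torch
import torch.nn as nn

feat_dim = 1000     # placeholder: size of the concatenated feature vector (assumed)
num_actions = 100   # placeholder: size of the transition-action inventory (assumed)

classifier = nn.Sequential(
    nn.Linear(feat_dim, 200),   # single hidden layer with 200 units
    nn.Tanh(),                  # tanh activation, as stated in the paper
    nn.Linear(200, num_actions) # scores over candidate transition actions
)

optimizer = torch.optim.SGD(classifier.parameters(), lr=0.005)
loss_fn = nn.CrossEntropyLoss()

def train_step(features: torch.Tensor, gold_action: torch.Tensor) -> float:
    """One supervised update on a (feature vector, oracle action) pair."""
    optimizer.zero_grad()
    logits = classifier(features)
    loss = loss_fn(logits, gold_action)
    loss.backward()
    optimizer.step()
    return loss.item()
```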