Symbolic Priors for RNN-based Semantic Parsing

Authors: Chunyang Xiao, Marc Dymetman, Claire Gardent

IJCAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We test our method on an extension of the Overnight dataset and show that it not only strongly improves over an RNN baseline, but also outperforms non-RNN models based on rich sets of hand-crafted features."
Researcher Affiliation | Collaboration | Chunyang Xiao and Marc Dymetman (Xerox Research Centre Europe, {chunyang.xiao, marc.dymetman}@xerox.com); Claire Gardent (CNRS, LORIA, UMR 7503, claire.gardent@loria.fr)
Pseudocode | No | The paper does not contain pseudocode or a clearly labeled algorithm block.
Open Source Code | No | The paper provides a GitHub link (https://github.com/chunyangx/overnight) for the extended Overnight+ dataset and mentions using Wilker Aziz's library (https://github.com/wilkeraziz/pcfg-sampling), but it neither links to nor explicitly states a release of the code implementing the authors' own method.
Open Datasets | Yes | "we release an extended Overnight+ dataset" (https://github.com/chunyangx/overnight)
Dataset Splits | Yes | "First, we group all the data and propose a new split. This split makes an 80%-20% random split over all the LFs and keeps 20% of the LFs (together with their corresponding utterances) as test and the remaining 80% as training. For each domain, we also add new named entities into the knowledge base and create a new development set and test set containing those new named entities." A minimal sketch of this LF-level split appears below the table.
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments (e.g., GPU/CPU models or memory specifications).
Software Dependencies | No | The paper mentions LSTM and MLP components, references [Xiao et al., 2016] for the neural network architecture, and points to Wilker Aziz's library for the intersection algorithm, but it does not give version numbers for any software library or dependency.
Experiment Setup | Yes | "We concatenate u_t and u_b and pass the concatenated vector to a two-layer MLP for the final prediction. At test time, we use a uniform-cost search algorithm [Russell and Norvig, 2003] to produce the DS with the highest probability. All the models are trained for 30 epochs." Sketches of the MLP head and the uniform-cost search follow the table.
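
To make the split procedure concrete, here is a minimal sketch of an LF-level 80%-20% split, assuming the data is available as (utterance, logical form) pairs; the function and parameter names are illustrative, not taken from the authors' code.

```python
import random
from collections import defaultdict

def split_by_logical_form(pairs, test_fraction=0.2, seed=0):
    """Hold out whole logical forms: every utterance of a held-out LF
    goes to the test set, so no test LF is ever seen in training.
    (Illustrative sketch; not the authors' code.)"""
    by_lf = defaultdict(list)
    for utterance, lf in pairs:
        by_lf[lf].append(utterance)

    lfs = sorted(by_lf)  # sort keys so the seeded shuffle is reproducible
    random.Random(seed).shuffle(lfs)
    test_lfs = set(lfs[:int(len(lfs) * test_fraction)])

    train, test = [], []
    for lf, utterances in by_lf.items():
        bucket = test if lf in test_lfs else train
        bucket.extend((u, lf) for u in utterances)
    return train, test
```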
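
The experiment-setup row bundles two mechanisms. First, the final classifier: a two-layer MLP applied to the concatenation of the two encodings u_t and u_b. A minimal PyTorch sketch follows; the paper does not name a framework, and all layer sizes and the activation are placeholders.

```python
import torch
import torch.nn as nn

class ConcatMLP(nn.Module):
    """Two-layer MLP over the concatenation of two encodings, in the
    spirit of the final prediction layer quoted above. All dimensions
    are placeholders, not values from the paper."""

    def __init__(self, dim_t, dim_b, hidden, n_out):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_t + dim_b, hidden),  # layer 1
            nn.Tanh(),
            nn.Linear(hidden, n_out),          # layer 2 -> output scores
        )

    def forward(self, u_t, u_b):
        # Concatenate the two representations, then score.
        return self.net(torch.cat([u_t, u_b], dim=-1))
```

Second, decoding: uniform-cost search returns the derivation sequence (DS) with the highest probability when each step's cost is its negative log-probability, since costs are then non-negative and the first goal popped from the priority queue is optimal. A generic sketch, with `expand` and `is_goal` assumed to be supplied by the caller:

```python
import heapq

def uniform_cost_search(start, expand, is_goal):
    """Generic uniform-cost search. `expand(state)` yields
    (step_cost, next_state) pairs; with step_cost = -log p(step),
    the first goal popped has the highest overall probability.
    (Illustrative sketch; names are not from the paper's code.)"""
    counter = 0                        # tie-breaker so states are never compared
    frontier = [(0.0, counter, start)]
    while frontier:
        cost, _, state = heapq.heappop(frontier)
        if is_goal(state):
            return cost, state
        for step_cost, nxt in expand(state):
            counter += 1
            heapq.heappush(frontier, (cost + step_cost, counter, nxt))
    return None                        # no complete derivation found
```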