Keep the Structure: A Latent Shift-Reduce Parser for Semantic Parsing
Authors: Yuntao Li, Bei Chen, Qian Liu, Yan Gao, Jian-Guang Lou, Yan Zhang, Dongmei Zhang
IJCAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conducted empirical studies on two datasets across different domains and different types of logical forms. The results demonstrate that the proposed method significantly improves the performance of semantic parsing, especially on unseen scenarios. |
| Researcher Affiliation | Collaboration | Yuntao Li¹, Bei Chen², Qian Liu³, Yan Gao², Jian-Guang Lou², Yan Zhang¹, Dongmei Zhang² (¹Peking University, ²Microsoft Research, ³Beihang University). Emails: {li.yt, zhyzhy001}@pku.edu.cn; qian.liu@buaa.edu.cn; {beichen, yan.gao, jlou, dongmeiz}@microsoft.com |
| Pseudocode | No | The paper describes the steps of its algorithms (e.g., the shift-reduce process) in paragraph form and through illustrations, but it does not include formally labeled pseudocode or algorithm blocks. (A generic, hypothetical sketch of such a shift-reduce loop appears after this table.) |
| Open Source Code | No | The paper does not contain any explicit statement about releasing the source code for the methodology or provide a link to a code repository. |
| Open Datasets | Yes | We conducted the experiments on two datasets, i.e. the Geoquery dataset and the ComplexWebQuestions (ComplexWQ) dataset. They have a large diversity in topic, query length, and type of LF representation, which challenges the flexibility and generality of our proposed LASP. Geoquery. This dataset collects 880 NL queries about U.S. geography with the corresponding Functional Query Language (FunQL) meaning representations [Zettlemoyer and Collins, 2012]. ComplexWQ. This dataset contains NL questions and their SPARQL logical forms on Freebase [Talmor and Berant, 2018]. |
| Dataset Splits | Yes | Geoquery. This dataset collects 880 NL queries about U.S. geography with the corresponding Functional Query Language (FunQL) meaning representations [Zettlemoyer and Collins, 2012]. 600 of them are split out as the training set, with the remaining 280 examples as the test set. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts, or detailed computer specifications) used for running its experiments, only mentioning general model types. |
| Software Dependencies | No | The paper mentions using 'Stanford NLP' and neural network components like 'GRU' and 'MLP', but it does not specify any software names with version numbers needed for replication (e.g., specific library versions for PyTorch/TensorFlow, or the version of Stanford NLP). |
| Experiment Setup | Yes | The basic encoder-decoder model with attention is used for the base parser, each of which has a single bi-directional GRU hidden layer with a hidden dimension of 512. For the shift-reduce splitter, we search the word embedding dimension over {50, 100, 300, 512} and the hidden size over {128, 256, 512}, selecting the best hyperparameters (marked in bold in the paper) for experiments. (A hedged configuration sketch of this setup follows the table.) |
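
Since the paper presents the shift-reduce process only in prose (see the Pseudocode row above), the following is a minimal, hypothetical sketch of what such a segmentation loop could look like. It is not the authors' implementation: the function `shift_reduce_split`, the `choose_action` policy, and the toy boundary rule are all assumptions made purely for illustration.

```python
# Hypothetical sketch of a shift-reduce segmentation loop; the paper
# describes this process only in prose, so all names and the toy policy
# below are illustrative assumptions, not the authors' code.

def shift_reduce_split(tokens, choose_action):
    """Segment `tokens` into contiguous spans via SHIFT/REDUCE actions.

    choose_action(token, current_span, closed_spans) returns "SHIFT"
    (extend the current span with this token) or "REDUCE" (close the
    current span before consuming this token).
    """
    spans, current = [], []
    for token in tokens:
        if choose_action(token, current, spans) == "REDUCE" and current:
            spans.append(current)  # close the span built so far
            current = []
        current.append(token)      # SHIFT: extend the (new) current span
    if current:
        spans.append(current)      # flush the last open span
    return spans

# Toy usage with a trivial rule-based policy (purely illustrative; the
# paper learns this decision with a neural splitter instead):
if __name__ == "__main__":
    boundaries = {"that", "which"}
    policy = lambda tok, cur, spans: "REDUCE" if tok in boundaries else "SHIFT"
    print(shift_reduce_split(
        "what rivers run through states that border texas".split(), policy))
```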
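
The quoted experiment setup is concrete enough for a configuration sketch. The snippet below is a minimal rendering of the reported dimensions, assuming PyTorch as the framework (the paper does not name one); the class name, vocabulary size, and the base parser's embedding dimension are hypothetical placeholders.

```python
# Minimal PyTorch sketch of the reported setup. Only the dimensions
# (a single bi-directional GRU layer, hidden size 512 for the base parser;
# the search grids for the splitter) come from the paper. The framework
# choice, class/variable names, and VOCAB_SIZE are assumptions.
import itertools
import torch.nn as nn

VOCAB_SIZE = 10_000  # placeholder; not stated in the paper

class BiGRUEncoder(nn.Module):
    """A single bi-directional GRU hidden layer, as in the base parser."""
    def __init__(self, emb_dim: int, hidden_dim: int):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, emb_dim)
        self.gru = nn.GRU(emb_dim, hidden_dim, num_layers=1,
                          batch_first=True, bidirectional=True)

    def forward(self, token_ids):
        return self.gru(self.embed(token_ids))

# Base parser: hidden dimension fixed at 512 (the embedding dimension is
# not quoted, so 512 here is an assumption for illustration).
base_parser_encoder = BiGRUEncoder(emb_dim=512, hidden_dim=512)

# Splitter: the grids the paper reports searching. The chosen values are
# "marked in bold" in the paper and are not restated in the quote, so no
# winner is hard-coded here.
EMB_DIMS = [50, 100, 300, 512]
HIDDEN_DIMS = [128, 256, 512]
for emb_dim, hidden_dim in itertools.product(EMB_DIMS, HIDDEN_DIMS):
    splitter_encoder = BiGRUEncoder(emb_dim, hidden_dim)
    # ... train and evaluate the splitter, keep the best configuration
```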