Learning Contextual Representations for Semantic Parsing with Generation-Augmented Pre-Training

Authors: Peng Shi, Patrick Ng, Zhiguo Wang, Henghui Zhu, Alexander Hanbo Li, Jun Wang, Cicero Nogueira dos Santos, Bing Xiang

AAAI 2021, pp. 13806-13814

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Based on experimental results, neural semantic parsers that leverage GAP MODEL as a representation encoder obtain new state-of-the-art results on both SPIDER and CRITERIA-TO-SQL benchmarks.
Researcher Affiliation | Collaboration | Peng Shi (1), Patrick Ng (2), Zhiguo Wang (2), Henghui Zhu (2), Alexander Hanbo Li (2), Jun Wang (2), Cicero Nogueira dos Santos (2), Bing Xiang (2); (1) University of Waterloo, (2) AWS AI Labs; peng.shi@uwaterloo.ca, {patricng,zhiguow,henghui,hanboli,juwanga,cicnog,bxiang}@amazon.com
Pseudocode | No | The paper describes the model architecture and training tasks conceptually but does not include any pseudocode or algorithm blocks.
Open Source Code | Yes | Our code is public for future work. https://github.com/awslabs/gap-text2sql
Open Datasets | Yes | SPIDER: SPIDER dataset (Yu et al. 2018) is a text-to-SQL dataset with 10,181 annotated parallel utterance-database-SQL triples. (...) CRITERIA-TO-SQL: (...) The dataset contains 2003 annotated examples, and the evaluation metrics are the SQL accuracy and execution accuracy.
Dataset Splits | Yes | After finetuning BART, the model can generate high-quality utterances logically consistent with the input SQL, achieving a 0.1934 BLEU score on the development set. (...) After fine-tuning, the model achieves 0.1821 BLEU score on the development set. (...) Table 2 shows the end-to-end results on the public development set and hidden test set of SPIDER.
Hardware Specification | No | The paper does not specify the hardware (e.g., GPU models, CPU types, memory) used for running the experiments.
Software Dependencies | No | The paper mentions models like BART and BERT but does not provide specific version numbers for any software libraries, frameworks, or dependencies.
Experiment Setup | Yes | In the pre-training, we train our GAP MODEL with the underlying transformers initialized with BART (Lewis et al. 2019) model. During the fine-tuning phase, we only leverage the encoder component of the GAP MODEL with 12-layer transformers as the encoder for the semantic parsers. (...) We use the standard MLM objective, with a masking rate of 35% sub-tokens in the whole input sequence, including the utterance and schema.
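
To make the quoted MLM pre-training setup concrete, below is a minimal, self-contained Python sketch of the masking step it describes: 35% of sub-tokens are masked across the concatenated utterance and schema sequence. The whitespace tokenization, the <sep> separator, and the <mask> string are illustrative assumptions, not the authors' implementation, which operates on BART's sub-word vocabulary.

# Minimal sketch (not the authors' code) of the MLM objective quoted above:
# mask 35% of sub-tokens across the concatenated utterance + schema input.
import random

MASK_TOKEN = "<mask>"   # placeholder standing in for BART's mask token
MASK_RATE = 0.35        # masking rate reported in the paper

def mask_for_mlm(utterance_tokens, schema_tokens, mask_rate=MASK_RATE, seed=0):
    """Return (masked_input, labels); labels keep originals only at masked positions."""
    rng = random.Random(seed)
    tokens = utterance_tokens + ["<sep>"] + schema_tokens  # <sep> is an assumed separator
    n_to_mask = max(1, round(mask_rate * len(tokens)))
    mask_positions = set(rng.sample(range(len(tokens)), n_to_mask))

    masked_input, labels = [], []
    for i, tok in enumerate(tokens):
        if i in mask_positions:
            masked_input.append(MASK_TOKEN)
            labels.append(tok)       # the model must reconstruct the original token here
        else:
            masked_input.append(tok)
            labels.append(None)      # position ignored by the MLM loss
    return masked_input, labels

if __name__ == "__main__":
    utterance = "show the names of all singers".split()
    schema = ["singer", ":", "name", "age", "country"]
    masked, labels = mask_for_mlm(utterance, schema)
    print(masked)
    print(labels)

In the paper's setup, the masked positions would be predicted by the GAP MODEL's transformer during pre-training; the sketch only illustrates how inputs and reconstruction labels are constructed.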