Neural Semantic Parsing in Low-Resource Settings with Back-Translation and Meta-Learning

Authors: Yibo Sun, Duyu Tang, Nan Duan, Yeyun Gong, Xiaocheng Feng, Bing Qin, Daxin Jiang

AAAI 2020, pp. 8960-8967 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Results on three benchmark datasets with different domains and programs show that our approach incrementally improves the accuracy. On WikiSQL, our best model is comparable to the state-of-the-art system learned from denotations. We conduct experiments on three tasks to test our approach, including generating SQL (or SQL-like) queries for both single-turn and multi-turn questions over web tables (Zhong, Xiong, and Socher 2017; Iyyer, Yih, and Chang 2017), and predicting subject-predicate pairs over a knowledge graph (Bordes et al. 2015).
Researcher Affiliation | Collaboration | 1 Harbin Institute of Technology, Harbin, China; 2 Microsoft Research Asia, Beijing, China; 3 Microsoft Search Technology Center Asia, Beijing, China
Pseudocode | Yes | Algorithm 1: Low-Resource Neural Semantic Parsing with Back-Translation and MAML. (A hedged sketch of this procedure appears after this table.)
Open Source Code | No | The paper does not contain an explicit statement about releasing source code or a link to a code repository for the described methodology.
Open Datasets | Yes | We conduct experiments on WikiSQL (Zhong, Xiong, and Socher 2017), which provides 87,726 annotated question-SQL pairs over 26,375 web tables... We use SequentialQA (Iyyer, Yih, and Chang 2017) for evaluation... We use SimpleQuestions (Bordes et al. 2015) as the testbed, where the logical form (lf) is a simple λ-calculus expression like λx.predicate(subject, x). (Illustrative examples of the three annotation formats appear after this table.)
Dataset Splits | No | The paper refers to training data and to a 'devset' for the learning curves in Figure 2, and states 'We conduct experiments on WikiSQL (Zhong, Xiong, and Socher 2017)'. However, it does not give explicit percentages or counts for the training, validation, and test splits, nor does it state that the standard predefined splits of the cited datasets are used.
Hardware Specification | No | The paper does not provide any specific details about the hardware used for its experiments, such as GPU models, CPU types, or memory specifications.
Software Dependencies | No | The paper mentions software components such as LSTM, LSTM-CRF, BM25 built on Elasticsearch, and Match-LSTM, but does not give version numbers for these or for other dependencies such as the programming language or deep learning framework. (A retrieval sketch based on Elasticsearch's default BM25 ranking appears after this table.)
Experiment Setup | No | The paper describes the general architecture of its base models and how the different data sources are combined for training, but it does not provide specific hyperparameter values such as learning rates, batch sizes, number of epochs, or optimizer settings needed for reproducibility.
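
For readers who want a concrete picture of Algorithm 1 from the Pseudocode row, here is a minimal Python sketch of one plausible reading of the procedure: a parser (question to logical form) and a question generator (logical form to question) pseudo-label data for each other via back-translation, and the parser is then meta-trained with a first-order MAML-style update. Toy linear models stand in for the paper's sequence-to-sequence networks; every name, loss, and hyperparameter below is an illustrative assumption, not the authors' implementation.

```python
# Hypothetical sketch of back-translation + first-order MAML; toy models only.
import copy
import torch
import torch.nn as nn


class ToySeq2Seq(nn.Module):
    """Stand-in for an encoder-decoder model, mapping one vector to another."""

    def __init__(self, dim: int = 16):
        super().__init__()
        self.net = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def loss_fn(model: nn.Module, batch) -> torch.Tensor:
    """MSE as a placeholder for the real sequence cross-entropy loss."""
    x, y = batch
    return ((model(x) - y) ** 2).mean()


def back_translation_round(parser, generator, unlabeled_q, unlabeled_lf,
                           opt_parser, opt_gen):
    """One round: each model pseudo-labels unlabeled data for the other."""
    # Parser labels raw questions with logical forms; pairs train the generator.
    with torch.no_grad():
        pseudo_lf = parser(unlabeled_q)
    opt_gen.zero_grad()
    loss_fn(generator, (pseudo_lf, unlabeled_q)).backward()
    opt_gen.step()

    # Generator labels raw logical forms with questions; pairs train the parser.
    with torch.no_grad():
        pseudo_q = generator(unlabeled_lf)
    opt_parser.zero_grad()
    loss_fn(parser, (pseudo_q, unlabeled_lf)).backward()
    opt_parser.step()


def maml_outer_step(parser, tasks, inner_lr=0.1, outer_lr=0.01):
    """First-order MAML: adapt a clone per task on its support set, then
    apply the averaged query-set gradient to the shared initialization."""
    meta_grads = [torch.zeros_like(p) for p in parser.parameters()]
    for support, query in tasks:
        fast = copy.deepcopy(parser)
        inner_opt = torch.optim.SGD(fast.parameters(), lr=inner_lr)
        inner_opt.zero_grad()
        loss_fn(fast, support).backward()
        inner_opt.step()
        fast.zero_grad()
        loss_fn(fast, query).backward()
        for g, p in zip(meta_grads, fast.parameters()):
            g += p.grad
    with torch.no_grad():
        for p, g in zip(parser.parameters(), meta_grads):
            p -= outer_lr * g / len(tasks)


if __name__ == "__main__":
    torch.manual_seed(0)
    parser, generator = ToySeq2Seq(), ToySeq2Seq()
    opt_p = torch.optim.Adam(parser.parameters(), lr=1e-3)
    opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
    questions, logical_forms = torch.randn(8, 16), torch.randn(8, 16)
    back_translation_round(parser, generator, questions, logical_forms,
                           opt_p, opt_g)
    # Tasks would correspond to the paper's per-cluster support/query sets.
    tasks = [((torch.randn(4, 16), torch.randn(4, 16)),
              (torch.randn(4, 16), torch.randn(4, 16))) for _ in range(3)]
    maml_outer_step(parser, tasks)
```

The first-order approximation (applying the adapted clone's query gradient directly to the shared initialization) is a common simplification of full MAML and may differ from the paper's exact update.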
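The three supervision formats cited in the Open Datasets row are easiest to picture from small examples. The records below are hand-written sketches in the style of each dataset; the questions, SQL, and knowledge-graph identifiers are invented for exposition and are not actual dataset entries.

```python
# Hand-written illustrations of the three annotation styles; all contents
# are invented for exposition, not drawn from the datasets.

wikisql_style = {  # single-turn question paired with a SQL query over a table
    "question": "Which player scored the most points in 2014?",
    "sql": "SELECT player FROM table ORDER BY points DESC LIMIT 1",
}

sequentialqa_style = {  # multi-turn questions over one web table
    "turns": [
        "Which countries have hosted the Olympics?",
        "Which of those hosted after 2000?",
    ],
}

simplequestions_style = {
    # Subject-predicate pair over a knowledge graph, written in the paper's
    # simple lambda-calculus-like form: lambda x. predicate(subject, x)
    "question": "Where was Marie Curie born?",
    "logical_form": "λx.place_of_birth(marie_curie, x)",
}

print(simplequestions_style["logical_form"])
```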
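For the 'BM25 built on Elasticsearch' component noted under Software Dependencies, Elasticsearch scores match queries with BM25 by default, so a retrieval step of this kind could look like the following sketch. It assumes the elasticsearch-py 8.x client and a server at localhost:9200; the index name and document are hypothetical.

```python
from elasticsearch import Elasticsearch  # assumes the elasticsearch-py 8.x client

# Elasticsearch ranks `match` queries with BM25 by default; index name and
# document below are hypothetical.
es = Elasticsearch("http://localhost:9200")

es.index(index="questions", id=1,
         document={"text": "what is the capital of france"})
es.indices.refresh(index="questions")

hits = es.search(index="questions",
                 query={"match": {"text": "capital of france"}})
for hit in hits["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["text"])
```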