Semantic Parsing with Neural Hybrid Trees
Authors: Raymond Hendy Susanto, Wei Lu
AAAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our approach on multilingual benchmark datasets. Table 1 shows evaluation results of our system as well as other systems from previous works under the same experimental settings. |
| Researcher Affiliation | Academia | Raymond Hendy Susanto, Wei Lu Singapore University of Technology and Design 8 Somapah Road, Singapore 487372 {raymond susanto, luwei}@sutd.edu.sg |
| Pseudocode | No | The paper describes algorithms and procedures (e.g., inside-outside, backpropagation, Viterbi decoding) but does not contain a structured pseudocode block or a clearly labeled algorithm section. |
| Open Source Code | Yes | We make our system, code and our newly created datasets on three languages available at http://www.statnlp.org/research/sp/. |
| Open Datasets | Yes | We evaluate our approach on the multilingual GeoQuery dataset, which is a standard benchmark evaluation for semantic parsing (Wong and Mooney 2006; Kate and Mooney 2006; Lu et al. 2008; Jones, Johnson, and Goldwater 2012). We make our system, code and our newly created datasets on three languages available at http://www.statnlp.org/research/sp/. |
| Dataset Splits | Yes | We use the standard train/test split (600/280) in order to make our results comparable to previous works. We select these parameters through validation on the English dataset by further splitting the training set into 400 instances for training and 200 instances for tuning. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments (e.g., GPU/CPU models, memory specifications). |
| Software Dependencies | No | The paper mentions software components such as "Torch7 library" and the "L-BFGS algorithm" but does not specify their version numbers, which is required for reproducibility. |
| Experiment Setup | Yes | Our hyperparameter tuning for the neural network includes the choice of the activation function {tanh, ReLU}, the number of hidden units {50, 100, 150, 200}, the number of hidden layers {0, 1, 2}, and the amount of dropout regularization {0, 0.25, 0.5}. Our final selection is the following: tanh activation, 100 hidden units, 1 hidden layer, and no dropout. |
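
The experiment-setup quote above fully specifies the neural component's final configuration, so it can be reconstructed as a small feed-forward block. The sketch below is a minimal PyTorch reconstruction (the paper itself used the Torch7 library); the input and output dimensions are placeholders, not values reported in the paper.

```python
# Minimal sketch of the reported final configuration, written in PyTorch
# as a stand-in for the Torch7 code the paper describes.
import torch
import torch.nn as nn

# Hyperparameter grid reported in the paper, kept here for reference.
SEARCH_SPACE = {
    "activation": ["tanh", "relu"],
    "hidden_units": [50, 100, 150, 200],
    "hidden_layers": [0, 1, 2],
    "dropout": [0.0, 0.25, 0.5],
}

INPUT_DIM = 50    # placeholder feature dimension, not from the paper
OUTPUT_DIM = 20   # placeholder number of output scores, not from the paper

# Final selection: tanh activation, 100 hidden units, 1 hidden layer, no dropout.
model = nn.Sequential(
    nn.Linear(INPUT_DIM, 100),
    nn.Tanh(),
    nn.Linear(100, OUTPUT_DIM),
)

# Dummy forward pass on a batch of 4 feature vectors.
scores = model(torch.randn(4, INPUT_DIM))
print(scores.shape)  # torch.Size([4, 20])
```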