A Neural Network Approach to Verb Phrase Ellipsis Resolution
Authors: Wei-Nan Zhang, Yue Zhang, Yuanxing Liu, Donglin Di, Ting Liu (pp. 7468-7475)
AAAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that the neural models outperform the state-of-the-art baselines in both subtasks and the end-to-end results. |
| Researcher Affiliation | Academia | (1) Research Center for Social Computing and Information Retrieval, Harbin Institute of Technology; (2) School of Engineering, Westlake University |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks explicitly labeled as such. Figure 2 shows a framework diagram, not pseudocode. |
| Open Source Code | Yes | We release the extended corpus and code for VPE resolution research. |
| Open Datasets | Yes | We use the dataset released by Bos and Spenader (2011). |
| Dataset Splits | Yes | Table 3 shows the accuracies of VPE detection in 5-fold cross validation. |
| Hardware Specification | No | The paper does not specify any hardware used for running experiments, such as specific CPU, GPU, or memory details. |
| Software Dependencies | No | The paper mentions several software tools, such as 'scikit-learn', 'fastText', 'NLTK', and the 'Berkeley parser', but does not provide version numbers for these dependencies, which are needed for reproducibility. |
| Experiment Setup | Yes | VPE detection. For the SVM model, the hyperparameters are C = 100 and γ = 0.5, and the kernel function is RBF. For the MLP model, the hidden state size is 1,024, the learning rate is 0.005 with 1,000 training epochs, and the batch size is 64. VPE resolution. For the MLP model, the batch size is 64, and the learning rate and weight decay are both 0.005. |
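
The reported detection setup maps naturally onto scikit-learn, which the paper cites as one of its tools. Below is a minimal sketch, assuming scikit-learn's `SVC` and `MLPClassifier` stand in for the paper's SVM and MLP detectors; the feature matrix `X` and labels `y` are synthetic placeholders, not the paper's actual lexical/syntactic features, and the authors' released implementation may differ in framework and optimizer details.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Placeholder features/labels; the real inputs come from the paper's
# feature extraction pipeline, which is not reproduced here.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
y = rng.integers(0, 2, size=200)

# VPE detection, SVM variant: RBF kernel, C = 100, gamma = 0.5 (as reported).
svm_detector = SVC(C=100, gamma=0.5, kernel="rbf")

# VPE detection, MLP variant: one 1,024-unit hidden layer, learning rate 0.005,
# up to 1,000 training epochs, batch size 64 (as reported; weight decay for the
# resolution MLP would map to a separate regularization setting).
mlp_detector = MLPClassifier(
    hidden_layer_sizes=(1024,),
    learning_rate_init=0.005,
    max_iter=1000,
    batch_size=64,
)

# 5-fold cross-validation accuracy, mirroring the protocol behind Table 3.
svm_scores = cross_val_score(svm_detector, X, y, cv=5, scoring="accuracy")
print("SVM 5-fold mean accuracy:", svm_scores.mean())
```

This is only a configuration sketch under the stated assumptions; reproducing the paper's numbers would additionally require its feature extraction and the Bos and Spenader (2011) corpus splits.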