Gated Neural Networks for Targeted Sentiment Analysis
Authors: Meishan Zhang, Yue Zhang, Duy-Tin Vo
AAAI 2016 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments show that our proposed model gives significantly higher accuracies compared to the current best method for targeted sentiment analysis. |
| Researcher Affiliation | Academia | 1. School of Computer Science and Technology, Heilongjiang University, Harbin, China 2. Singapore University of Technology and Design {meishan_zhang, yue_zhang}@sutd.edu.sg, duytin_vo@mymail.sutd.edu.sg |
| Pseudocode | No | The paper contains mathematical formulations for the model (e.g., equations for h_i, r_i, z_i, h_lr, r_l, r_r, z_l, z_r, z_lr, h_l, h_t, h_r) but does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | We make our system and source code public under GPL at https://github.com/SUTDNLP/NNTargetedSentiment. |
| Open Datasets | Yes | Our experimental data are collected from three sources, including Dong et al. (2014), which consists of 6,940 examples, the MPQA corpus [1], from which we collected 1,467 targets that have been annotated as being positive/negative, and the corpus of Mitchell et al. (2013) [2], which consists of 3,288 entities. [1] http://mpqa.cs.pitt.edu/corpora/mpqa_corpus/ [2] http://www.m-mitchell.com/code/index.html |
| Dataset Splits | Yes | We merge the three sources of annotations, shuffle them randomly, and divide them into training, development and testing sets. Table 1 shows the corpus statistics (#Targets / #+ / #− / #0): training 9,489 / 2,416 / 2,384 / 4,689; development 1,036 / 255 / 272 / 509; testing 1,170 / 294 / 295 / 581. (A minimal split sketch is given below the table.) |
| Hardware Specification | No | No specific hardware details (e.g., CPU or GPU models, memory, or cloud instance types) used for running the experiments are mentioned in the paper. |
| Software Dependencies | No | The paper mentions using 'Adagrad' for optimization and the 'word2vec tool', but it does not specify any software dependencies (e.g., programming languages, libraries, or frameworks) with their version numbers required for replication. |
| Experiment Setup | Yes | Table 2 shows the values, where H_rnn denotes the dimension size of the recurrent neural layers, H_context denotes the dimension reduction sizes for the left context representation, the right context representation and the target representation, λ denotes the regularization hyper-parameter, α denotes the initial step value of parameter updating, and p_drop denotes the dropout value. All the matrices in the model are initialized randomly with a uniform distribution in (-0.01, 0.01). (An initialization sketch is given below the table.) |
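
The data preparation quoted in the Dataset Splits row (merge the three annotation sources, shuffle randomly, divide into training/development/testing) can be sketched as follows. This is a minimal illustration, assuming a plain random shuffle followed by fixed-size partitions taken from Table 1; the function name, seed, and the order in which partitions are drawn are assumptions, not details given in the paper.

```python
import random

def merge_and_split(dong, mpqa, mitchell, dev_size=1036, test_size=1170, seed=0):
    """Merge the three annotation sources, shuffle, and carve off development
    and test sets of the sizes reported in Table 1; the rest is training data.
    The seed and the partition order are illustrative, not from the paper."""
    # 6,940 (Dong et al. 2014) + 1,467 (MPQA) + 3,288 (Mitchell et al. 2013) items
    data = list(dong) + list(mpqa) + list(mitchell)
    random.Random(seed).shuffle(data)
    dev = data[:dev_size]
    test = data[dev_size:dev_size + test_size]
    train = data[dev_size + test_size:]  # 9,489 targets in the paper's split
    return train, dev, test
```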
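
The Experiment Setup row states that all matrices are initialized from a uniform distribution in (-0.01, 0.01) and names the hyper-parameters whose values appear in the paper's Table 2. Below is a minimal sketch of that initialization; the numeric hyper-parameter values are placeholders for illustration, not the published settings.

```python
import numpy as np

# Placeholder hyper-parameters; the published values are in the paper's Table 2.
H_RNN = 100      # H_rnn: dimension of the recurrent neural layers
H_CONTEXT = 50   # H_context: reduced dimension for left/right context and target
LAMBDA = 1e-8    # lambda: L2 regularization strength
ALPHA = 0.01     # alpha: initial Adagrad step value
P_DROP = 0.5     # p_drop: dropout probability

rng = np.random.default_rng(0)

def init_matrix(shape, low=-0.01, high=0.01):
    """Uniform initialization in (-0.01, 0.01), as stated in the paper."""
    return rng.uniform(low, high, size=shape)

# Example: a recurrent weight matrix and a context dimension-reduction matrix.
W_rnn = init_matrix((H_RNN, H_RNN))
W_context = init_matrix((H_CONTEXT, H_RNN))
```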