Learning Latent Sentiment Scopes for Entity-Level Sentiment Analysis

Authors: Hao Li, Wei Lu

AAAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experiments on the standard datasets demonstrate that our approach is able to achieve better results compared to existing approaches based on conventional conditional random fields (CRFs) and a more recent work based on neural networks. We mainly performed our experiments based on the dataset from (Mitchell et al. 2013). Following previous research efforts (Mitchell et al. 2013; Zhang, Zhang, and Vo 2015), we report 10-fold cross-validation results, and split 10% of the training set for development.
Researcher Affiliation | Academia | Hao Li, Wei Lu, Singapore University of Technology and Design, 8 Somapah Road, Singapore 487372; hao_li@mymail.sutd.edu.sg, luwei@sutd.edu.sg
Pseudocode | No | The paper describes algorithms and model components (e.g., a generalized forward-backward style algorithm and Viterbi decoding) but does not provide any structured pseudocode or algorithm blocks. (A generic decoding sketch is given after the table.)
Open Source Code | Yes | We make our code, system and supplementary material available at http://statnlp.org/research/st/.
Open Datasets | Yes | We mainly performed our experiments based on the dataset from (Mitchell et al. 2013). This dataset consists of 7,105 Spanish tweets and 2,350 English tweets, with named entities and their sentiment information annotated. We also carried out additional experiments based on datasets from SemEval 2016 and TASS 2015.
Dataset Splits | Yes | Following previous research efforts (Mitchell et al. 2013; Zhang, Zhang, and Vo 2015), we report 10-fold cross-validation results, and split 10% of the training set for development. (A split-construction sketch is given after the table.)
Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments, such as CPU/GPU models, memory, or computing platform.
Software Dependencies | No | The paper mentions using "LBFGS as the optimization algorithm", a CRF model, and specific lexicons (the MPQA lexicon and the SentiWordNet lexicon) but does not provide version numbers for any software components.
Experiment Setup | Yes | We aim to minimize the negative joint log-likelihood of our dataset with L2 regularization, which is defined as: ... + λwᵀw (2), where (x⁽ⁱ⁾, y⁽ⁱ⁾) is the i-th training instance and λ is the L2 regularization parameter. In this work, we choose to use LBFGS (Liu and Nocedal 1989) as the optimization algorithm. We tuned L using the development set, where L = 6 for English, and L = 7 for Spanish. (An optimizer sketch is given after the table.)
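
As the Pseudocode row notes, the paper describes Viterbi-style decoding but provides no algorithm block. Below is a minimal, generic first-order Viterbi sketch for orientation only; the label set, emission scores, and transition matrix are illustrative placeholders, not the authors' latent-scope model.

```python
import numpy as np

def viterbi(emissions: np.ndarray, transitions: np.ndarray) -> list[int]:
    """Generic first-order Viterbi decoding.

    emissions: (T, K) per-position label scores in log-space.
    transitions: (K, K) label-to-label scores in log-space.
    Returns the highest-scoring label sequence as a list of label indices.
    """
    T, K = emissions.shape
    score = np.full((T, K), -np.inf)
    backptr = np.zeros((T, K), dtype=int)
    score[0] = emissions[0]
    for t in range(1, T):
        # cand[i, j] = score of ending at label i then moving to label j.
        cand = score[t - 1][:, None] + transitions + emissions[t][None, :]
        backptr[t] = cand.argmax(axis=0)
        score[t] = cand.max(axis=0)
    # Follow back-pointers from the best final label.
    best = [int(score[-1].argmax())]
    for t in range(T - 1, 0, -1):
        best.append(int(backptr[t][best[-1]]))
    return best[::-1]
```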
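The Dataset Splits row describes 10-fold cross-validation with 10% of each fold's training portion held out for development. A minimal sketch of that protocol using scikit-learn helpers; the shuffling, seeding, and helper choice are assumptions, since the paper does not specify its splitting code.

```python
from sklearn.model_selection import KFold, train_test_split

def ten_fold_splits(instances, seed=0):
    """Yield (train, dev, test) index arrays: 10 folds, with 10% of each
    fold's training portion split off for development."""
    kf = KFold(n_splits=10, shuffle=True, random_state=seed)
    for train_idx, test_idx in kf.split(instances):
        train_idx, dev_idx = train_test_split(
            train_idx, test_size=0.1, random_state=seed)
        yield train_idx, dev_idx, test_idx
```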
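The Experiment Setup row quotes the training objective: a negative joint log-likelihood plus an L2 term λwᵀw, minimized with LBFGS. A minimal sketch of that setup with SciPy's L-BFGS-B; the toy logistic-regression likelihood below is a stand-in for the paper's latent-variable CRF objective, which is not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in data and a logistic-regression negative log-likelihood.
# Only the "NLL + lambda * w^T w, minimized with LBFGS" shape matches the
# paper; the actual model likelihood differs.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = rng.integers(0, 2, size=100)
lam = 0.01  # L2 regularization parameter (lambda in Eq. 2)

def objective(w):
    z = X @ w
    nll = np.sum(np.logaddexp(0.0, z) - y * z)  # -sum_i log p(y_i | x_i)
    return nll + lam * (w @ w)                  # + lambda * w^T w

result = minimize(objective, np.zeros(X.shape[1]), method="L-BFGS-B")
print(result.x)
```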