Word-Level Contextual Sentiment Analysis with Interpretability
Authors: Tomoki Ito, Kota Tsubouchi, Hiroki Sakaji, Tatsuo Yamashita, Kiyoshi Izumi (pp. 4231-4238)
AAAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Using real textual datasets, we experimentally demonstrate that the proposed LEXIL is effective for improving the interpretability of SINN and that SINN features both high WCSA ability and high interpretability. Section 3, Experimental Interpretability Evaluation: This section experimentally evaluates the proposed method in terms of interpretability in A) WOSL, B) LWCL, and C) GWCL using real textual datasets. |
| Researcher Affiliation | Collaboration | Tomoki Ito,1 Kota Tsubouchi,2 Hiroki Sakaji,1 Tatsuo Yamashita,2 Kiyoshi Izumi1 1Graduate School of Engineering, The University of Tokyo, 2Yahoo Japan Corporation |
| Pseudocode | Yes | Algorithm 1 LEXIL: Lexical Initialization Learning. A hedged sketch of the idea follows this report table. |
| Open Source Code | Yes | The dataset, code, and details will be available in http://bit.ly/SINN20190904. |
| Open Datasets | Yes | We used the following four textual corpora including reviews and their sentiment tags for evaluation. 1) Eco Rev I and II. These two datasets are composed of comments on current (I) and future (II) economic trends and their positive or negative sentiment tags. 2) Yahoo reviews. This dataset is composed of comments on stocks and their long (positive) or short (negative) attitude tags, extracted from financial micro-blogs. 3) Sentiment 140. This dataset contains tweets and their positive or negative sentiment tags. |
| Dataset Splits | Yes | We divided each dataset into training, validation, and test datasets, as outlined in Table 1 (Table 1: Dataset details for Text Corpus and Annotated data). A hedged split sketch follows this report table. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, processor types) used for running experiments. |
| Software Dependencies | No | The paper mentions methods such as LSTM and the skip-gram embedding method but does not name specific libraries, frameworks, or version numbers for its software environment. |
| Experiment Setup | Yes | We set the dimension of the hidden and embedding vectors to 200 and the number of epochs to 50 with early stopping. We used the mean score of five trials for evaluation. A hedged configuration sketch follows this report table. |
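
The excerpt names Algorithm 1 (LEXIL: Lexical Initialization Learning) but does not reproduce its steps. Below is a minimal sketch of the general idea as we read it, assuming LEXIL first fits a word-level sentiment layer to polarities from a sentiment lexicon before the full network is trained end to end on document-level tags; the lexicon, vocabulary, layer names, and PyTorch usage are illustrative assumptions, not the authors' code.

```python
# Hedged sketch of lexical-initialization-style pre-training (LEXIL-like).
# Assumption: a word-level sentiment layer maps embeddings to a scalar score
# and is first fit to lexicon polarities before end-to-end training.
# The lexicon, vocabulary, and layer shapes below are hypothetical.
import torch
import torch.nn as nn

vocab = {"good": 0, "bad": 1, "growth": 2, "decline": 3, "the": 4}
lexicon = {"good": 1.0, "bad": -1.0, "growth": 1.0, "decline": -1.0}  # hypothetical polarity lexicon

emb_dim = 200                            # matches the reported embedding dimension
embedding = nn.Embedding(len(vocab), emb_dim)
word_sentiment = nn.Linear(emb_dim, 1)   # word-level sentiment layer (WOSL-like, assumed)

ids = torch.tensor([vocab[w] for w in lexicon])
targets = torch.tensor([[lexicon[w]] for w in lexicon])

opt = torch.optim.Adam(list(embedding.parameters()) + list(word_sentiment.parameters()), lr=1e-3)
loss_fn = nn.MSELoss()

# Step 1 (initialization): fit word-level scores to the lexicon polarities.
for _ in range(100):
    opt.zero_grad()
    scores = torch.tanh(word_sentiment(embedding(ids)))
    loss = loss_fn(scores, targets)
    loss.backward()
    opt.step()

# Step 2 (not shown): continue end-to-end training of the full network on
# document-level sentiment tags, starting from these initialized parameters.
```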
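
The paper states only that each dataset was divided into training, validation, and test portions, with counts given in its Table 1; the exact split procedure is not described in the excerpt. The sketch below shows one plausible way to produce such a split; the 80/10/10 ratio, the random seed, and the helper name `split_dataset` are hypothetical.

```python
# Hedged sketch of a train/validation/test split, as described for each corpus.
# The 80/10/10 ratio and seed are hypothetical; the paper's Table 1 gives the
# actual per-dataset counts.
import random

def split_dataset(examples, train_frac=0.8, val_frac=0.1, seed=0):
    rng = random.Random(seed)
    shuffled = examples[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_val = int(len(shuffled) * val_frac)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test

# Example usage with placeholder (text, label) pairs.
data = [("good growth ahead", 1), ("sharp decline expected", 0)] * 50
train, val, test = split_dataset(data)
print(len(train), len(val), len(test))
```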
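
For the experiment setup, the excerpt reports hidden and embedding dimensions of 200, up to 50 epochs with early stopping, and evaluation by the mean score over five trials. The sketch below wires those reported numbers into a hypothetical training-and-evaluation loop; the patience value, the `train_and_evaluate` stand-in, and its dummy learning curve are assumptions for illustration only.

```python
# Hedged sketch of the reported setup: hidden/embedding size 200, up to 50
# epochs with early stopping, and the mean over five trials used for evaluation.
# train_and_evaluate is a hypothetical stand-in for the paper's training routine.
import statistics

CONFIG = {"hidden_dim": 200, "embedding_dim": 200, "max_epochs": 50, "patience": 5}  # patience is assumed

def train_and_evaluate(config, seed):
    """Hypothetical placeholder: train once with early stopping, return a test score."""
    best_val, epochs_without_improvement = float("-inf"), 0
    for epoch in range(config["max_epochs"]):
        val_score = 0.5 + 0.01 * min(epoch, 10) - 0.001 * seed  # dummy learning curve
        if val_score > best_val:
            best_val, epochs_without_improvement = val_score, 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= config["patience"]:
                break  # early stopping
    return best_val

# Mean score of five trials, as reported in the paper.
scores = [train_and_evaluate(CONFIG, seed) for seed in range(5)]
print(statistics.mean(scores))
```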