SNNN: Promoting Word Sentiment and Negation in Neural Sentiment Classification
Authors: Qinmin Hu, Jie Zhou, Qin Chen, Liang He
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, the experiments conducted on the IMDB and Yelp datasets show that our approach is superior to the state-of-the-art methods. |
| Researcher Affiliation | Academia | Qinmin Hu, Jie Zhou, Qin Chen, Liang He Shanghai Key Laboratory of Multidimensional Information Processing School of Computer Science and Software Engineering East China Normal University, Shanghai, 200062, China {qmhu,lh}@cs.ecnu.edu.cn, {jzhou, qchen}@ica.stc.sh.cn |
| Pseudocode | No | The paper describes the model using mathematical equations but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statement about releasing its source code or include a link to a code repository for the described methodology. |
| Open Datasets | Yes | We conduct experiments to evaluate the effectiveness of our proposed approach on four datasets: Yelp 2013-2015 and IMDB, which are the same as (Tang, Qin, and Liu 2015a). The statistics of the datasets are summarized in Table 2. |
| Dataset Splits | Yes | For training, development and testing purposes, we divide the data in the proportion 8:1:1, and the NLTK tool has been adopted on all datasets for tokenization and sentence splitting (see the sketch after this table). |
| Hardware Specification | No | The paper does not provide specific hardware details (like GPU/CPU models or memory) used for running its experiments within the text. |
| Software Dependencies | No | The paper mentions 'NLTK tool' for tokenization and sentence splitting but does not provide specific version numbers for NLTK or any other software dependencies. |
| Experiment Setup | No | In order to better compare with the existing work of Chen and Tang (Chen et al. 2016; Tang, Qin, and Liu 2015a), we train our data with the same settings as Chen and Tang. The details are referred to (Chen et al. 2016; Tang, Qin, and Liu 2015a) because of the page limit. |
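The 8:1:1 train/development/test split and NLTK preprocessing quoted in the Dataset Splits row can be illustrated with a minimal sketch. The helper names, the random seed, and the in-memory list of documents below are illustrative assumptions; the paper does not release its preprocessing code or report an NLTK version.

```python
import random
import nltk

# Assumed helper: split a list of documents 8:1:1 into train/dev/test,
# mirroring the proportion quoted in the paper (not the authors' code).
def split_8_1_1(documents, seed=42):
    docs = list(documents)                 # copy so the caller's list is untouched
    random.Random(seed).shuffle(docs)      # deterministic shuffle for reproducibility
    n = len(docs)
    n_train, n_dev = int(0.8 * n), int(0.1 * n)
    train = docs[:n_train]
    dev = docs[n_train:n_train + n_dev]
    test = docs[n_train + n_dev:]
    return train, dev, test

# NLTK sentence splitting and word tokenization, as the paper states NLTK
# was applied to all datasets (exact version unreported).
def preprocess(document):
    sentences = nltk.sent_tokenize(document)
    return [nltk.word_tokenize(s) for s in sentences]
```

Running the NLTK functions requires the Punkt tokenizer models, e.g. `nltk.download('punkt')`, before calling `preprocess`.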