Aspect Sentiment Classification with both Word-level and Clause-level Attention Networks
Authors: Jingjing Wang, Jie Li, Shoushan Li, Yangyang Kang, Min Zhang, Luo Si, Guodong Zhou
IJCAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on the laptop and restaurant datasets from SemEval-2015 demonstrate the effectiveness of our proposed approach to aspect sentiment classification. |
| Researcher Affiliation | Collaboration | 1 School of Computer Science and Technology, Soochow University, China; 2 Alibaba Group, China; 3 School of Computer Science and Engineering, Southeast University, China |
| Pseudocode | No | The paper includes architectural diagrams and mathematical formulas but no blocks explicitly labeled 'Pseudocode' or 'Algorithm'. |
| Open Source Code | No | The paper mentions 'The word embedding resource is released at https://github.com/jjwangnlp/PTE2ASC' but this refers to a resource used, not the source code for the main methodology described in the paper. |
| Open Datasets | Yes | We conduct experiments on two datasets (i.e., one from the laptop domain and the other from the restaurant domain) from SemEval-2015 Task 12 [Pontiki et al., 2015] to validate the effectiveness of our approach. A detailed introduction of this task is available at http://alt.qcri.org/semeval2015/task12/ |
| Dataset Splits | Yes | We also set aside 10% from the training set as the development data, which is used to tune algorithm parameters. (A minimal split sketch appears after the table.) |
| Hardware Specification | No | The paper does not specify any hardware details such as GPU models, CPU models, or memory used for running the experiments. |
| Software Dependencies | No | The paper mentions using 'Adagrad' for optimization and a 'Discourse Segmenter Tool' but does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | The dimensions of attention vectors and LSTM hidden states are set to be 300. Specifically, the initial learning rate is 0.1. The regularization weight of the parameters is 10^-5, and the dropout rate is set to 0.25. (These values are wired together in the second sketch after the table.) |
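The 10% development holdout noted in the Dataset Splits row is straightforward to reproduce. Below is a minimal sketch, assuming the SemEval-2015 training examples are already loaded into a list; the function name `split_train_dev`, the fixed seed, and the unstratified random shuffle are all assumptions, since the paper does not describe how the holdout was drawn.

```python
import random

def split_train_dev(train_examples, dev_fraction=0.1, seed=42):
    """Hold out a fraction of the training set as development data.

    The paper states that 10% of the training set is set aside for
    tuning; it does not specify a random seed or whether the split is
    stratified, so both choices here are assumptions.
    """
    examples = list(train_examples)
    random.Random(seed).shuffle(examples)
    n_dev = int(len(examples) * dev_fraction)
    return examples[n_dev:], examples[:n_dev]  # (train, dev)

# Usage (assuming semeval2015_train is a list of examples):
# train, dev = split_train_dev(semeval2015_train)
```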
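The Experiment Setup row pins down the reported hyperparameters: 300-dimensional attention vectors and LSTM hidden states, Adagrad with an initial learning rate of 0.1, a regularization weight of 10^-5, and a dropout rate of 0.25. The PyTorch sketch below wires those numbers together; the model class `CLWANStub` is a hypothetical placeholder (the authors released no code for the method), and applying the regularization as optimizer weight decay is likewise an assumption.

```python
import torch
import torch.nn as nn

HIDDEN_DIM = 300      # LSTM hidden states and attention vectors (reported in the paper)
DROPOUT = 0.25        # dropout rate reported in the paper
LEARNING_RATE = 0.1   # initial Adagrad learning rate reported in the paper
L2_WEIGHT = 1e-5      # regularization weight reported in the paper

class CLWANStub(nn.Module):
    """Hypothetical stand-in for the paper's word/clause-level attention model."""

    def __init__(self, embed_dim=300, hidden_dim=HIDDEN_DIM, num_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.dropout = nn.Dropout(DROPOUT)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, embedded_tokens):
        # embedded_tokens: (batch, seq_len, embed_dim)
        outputs, _ = self.lstm(embedded_tokens)
        # Placeholder mean pooling; the paper applies word-level and
        # clause-level attention here instead.
        pooled = self.dropout(outputs.mean(dim=1))
        return self.classifier(pooled)

model = CLWANStub()
optimizer = torch.optim.Adagrad(
    model.parameters(), lr=LEARNING_RATE, weight_decay=L2_WEIGHT
)
```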