Multi-Entity Aspect-Based Sentiment Analysis With Context, Entity and Aspect Memory

Authors: Jun Yang, Runqi Yang, Chongjun Wang, Junyuan Xie

AAAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experimental results show that our CEA method achieves a significant gain over several baselines, including the state-of-the-art method for the ABSA task, and their enhanced versions, on datasets for ME-ABSA and ABSA tasks.
Researcher Affiliation | Academia | Jun Yang, Runqi Yang, Chongjun Wang, Junyuan Xie. National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China. yangjunny@126.com, runqiyang@gmail.com, chjwang@nju.edu.cn, jyxie@nju.edu.cn
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks (clearly labeled algorithm sections or code-like formatted procedures).
Open Source Code | Yes | Code and data are available at http://www.marcpoint.com/junyang.html.
Open Datasets | Yes | We release our dataset, looking forward to advancing the research in fine-grained sentiment analysis. Code and data are available at http://www.marcpoint.com/junyang.html.
Dataset Splits | Yes | We provide a training set BC-Train for model training, a development set BC-Dev for parameter tuning, and a test set BC-Test for evaluation. The statistics of our dataset are listed in Table 1.
Hardware Specification | Yes | Testing time is the running time on test set BC-Test on an i7-16GB-GTX1070 (GPU) computer with the TensorFlow framework, and the batch size at testing is set to 1000 for all methods.
Software Dependencies | No | The paper mentions the 'tensorflow framework' and 'Jieba' but does not specify their version numbers.
Experiment Setup | Yes | We train our model for 10 iterations with a batch of 25 instances, an L2-regularization weight of 0.001, and a learning rate of 0.001 for the Adam optimizer. The dropout rates before LSTM and before softmax are both set to 0.5. The maximum hop count is set to 3.
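
For readers who want to reproduce the reported configuration, below is a minimal TensorFlow/Keras sketch of the training setup described in the Experiment Setup and Hardware Specification rows. The architecture shown (`build_model`, a single embedding-LSTM-softmax stack) is a hypothetical stand-in, not the authors' CEA model, and the memory component with its maximum of 3 hops is not modeled here; only the hyperparameters (10 epochs, batch size 25, L2 weight 0.001, Adam at 0.001, dropout 0.5 before the LSTM and before the softmax, test batch size 1000) are taken from the paper's reported values.

```python
# Minimal sketch of the reported training configuration using TensorFlow/Keras.
# NOT the authors' released code: the model body is a placeholder classifier,
# and the CEA memory component (up to 3 hops) is not reproduced here.
import tensorflow as tf

L2_WEIGHT = 1e-3      # L2-regularization weight (from the paper)
LEARNING_RATE = 1e-3  # Adam learning rate (from the paper)
DROPOUT = 0.5         # dropout before LSTM and before softmax (from the paper)
EPOCHS = 10           # "10 iterations" over the training set
TRAIN_BATCH = 25      # training batch size (from the paper)
TEST_BATCH = 1000     # batch size used when timing inference on BC-Test

def build_model(vocab_size=10_000, embed_dim=300, num_classes=3):
    """Hypothetical stand-in architecture; only the regularization and
    dropout settings above are taken from the paper."""
    reg = tf.keras.regularizers.l2(L2_WEIGHT)
    tokens = tf.keras.Input(shape=(None,), dtype=tf.int32)
    x = tf.keras.layers.Embedding(vocab_size, embed_dim)(tokens)
    x = tf.keras.layers.Dropout(DROPOUT)(x)   # dropout before the LSTM
    x = tf.keras.layers.LSTM(embed_dim, kernel_regularizer=reg)(x)
    x = tf.keras.layers.Dropout(DROPOUT)(x)   # dropout before the softmax
    probs = tf.keras.layers.Dense(num_classes, activation="softmax",
                                  kernel_regularizer=reg)(x)
    return tf.keras.Model(tokens, probs)

model = build_model()
model.compile(optimizer=tf.keras.optimizers.Adam(LEARNING_RATE),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# With the BC-* splits loaded as (x, y) arrays (placeholders here):
# model.fit(x_train, y_train, epochs=EPOCHS, batch_size=TRAIN_BATCH,
#           validation_data=(x_dev, y_dev))
# model.evaluate(x_test, y_test, batch_size=TEST_BATCH)
```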