Target-Aspect-Sentiment Joint Detection for Aspect-Based Sentiment Analysis

Authors: Hai Wan, Yufei Yang, Jianfeng Du, Yanan Liu, Kunxun Qi, Jeff Z. Pan (pp. 9122-9129)

AAAI 2020

Reproducibility Variable | Result | LLM Response
--- | --- | ---
Research Type | Experimental | Experimental results on the SemEval-2015 and SemEval-2016 restaurant datasets show that the proposed method achieves a high performance in detecting target-aspect-sentiment triples even for the implicit target cases; moreover, it even outperforms the state-of-the-art methods for those subtasks of target-aspect-sentiment detection that they are competent to.
Researcher Affiliation | Academia | (1) School of Data and Computer Science, Sun Yat-sen University, Guangzhou 510006, P.R. China; (2) Guangzhou Key Laboratory of Multilingual Intelligent Processing, Guangdong University of Foreign Studies, Guangzhou 510006, P.R. China; (3) Department of Computing Science, The University of Aberdeen, Aberdeen AB24 3UE, UK
Pseudocode | No | The paper describes the model architecture and steps, but it does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks.
Open Source Code | Yes | Code and experimental datasets for TAS-BERT are available at https://github.com/sysulic/TAS-BERT
Open Datasets | Yes | We conducted experiments on two datasets in the restaurant domain, where one (denoted Res15) is from SemEval-2015 Task 12 and the other (denoted Res16) is from SemEval-2016 Task 5.
Dataset Splits | No | Table 1 reports the statistics on the two experimental datasets, but it shows only 'Train' and 'Test' sets; no validation set is described, nor is it explained how one was used, if present.
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU or CPU models.
Software Dependencies | No | The paper mentions using 'the pre-trained language model BERT' and links to its GitHub repository, but it does not provide version numbers for BERT, Python, PyTorch/TensorFlow, or any other software dependencies used in the experiments.
Experiment Setup | Yes | To train these models, we set the dropout probability as 0.1 for all layers, the max sequence length as 128, the learning rate as 2e-5, and the maximum number of epochs as 30.
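
For concreteness, the reported training settings can be collected into a single configuration object, as in the minimal Python sketch below. The class and field names are illustrative assumptions, not identifiers from the TAS-BERT repository; only the values come from the paper's experiment setup.

```python
from dataclasses import dataclass

@dataclass
class TrainConfig:
    """Hypothetical container for the hyperparameters reported in the paper."""
    dropout_prob: float = 0.1    # dropout probability for all layers
    max_seq_length: int = 128    # maximum input sequence length
    learning_rate: float = 2e-5  # learning rate for BERT fine-tuning
    max_epochs: int = 30         # maximum number of training epochs

if __name__ == "__main__":
    print(TrainConfig())
```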