Aspect-Based Sentiment Analysis with Explicit Sentiment Augmentations
Authors: Jihong Ouyang, Zhiyao Yang, Silong Liang, Bing Wang, Yimeng Wang, Ximing Li
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We test ABSA-ESA on two ABSA benchmarks. The results show that ABSA-ESA outperforms the SOTA baselines on implicit and explicit sentiment accuracy. |
| Researcher Affiliation | Academia | Jihong Ouyang1,2*, Zhiyao Yang1,2*, Silong Liang1,2, Bing Wang1,2, Yimeng Wang1, Ximing Li1,2 1College of Computer Science and Technology, Jilin University, China 2Key Laboratory of Symbolic Computation and Knowledge Engineering of MOE, Jilin University, China {ouyj@, zhiyaoy20@mails., liangsl23@mails.}jlu.edu.cn |
| Pseudocode | Yes | Algorithm 1: t-th Step Constrained Beam Search ... For clarity, Algorithm 1 provides the i-th CBS step process. |
| Open Source Code | No | The paper does not provide any statements about making its source code publicly available, nor does it include a link to a code repository for the methodology described. |
| Open Datasets | Yes | In our experiments, we use the dataset released by Li et al. (2021b), which has already been labeled with explicit and implicit sentiment. |
| Dataset Splits | No | The paper mentions 'train' and 'test' datasets in Table 1 and discusses training process details, but it does not specify explicit validation dataset splits (percentages or counts) or reference predefined validation splits. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU models, CPU types, memory amounts) used to run the experiments. |
| Software Dependencies | No | The paper mentions models like T5 and BERT, but it does not provide specific version numbers for software dependencies (e.g., Python, PyTorch, TensorFlow, or specific library versions) needed for reproducibility. |
| Experiment Setup | Yes | In the training process, we set the learning rate as 5×10⁻⁵ (Restaurant) and 2×10⁻⁵ (Laptop). The batch size is 4 (Restaurant) and 8 (Laptop). We set the training epoch as 15 for both datasets. We set k_c = 2 in data selection and k_n = 4 for Unlikelihood Contrastive Regularization. For Constrained Beam Search, we select \|V\| = 6 and z = 3. |
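The paper's Algorithm 1 (the per-step Constrained Beam Search procedure) is not reproduced here, but the general mechanism it builds on can be sketched generically: at each step, every surviving hypothesis is expanded only with tokens from a restricted candidate vocabulary, and the top-scoring hypotheses are kept. The function and variable names below (`cbs_step`, `log_probs_fn`, `allowed_vocab`) are illustrative assumptions, not the paper's notation; the comments map `beam_width` and `allowed_vocab` to the reported z = 3 and |V| = 6 only by analogy.

```python
import math
from heapq import nlargest

def cbs_step(beams, log_probs_fn, allowed_vocab, beam_width):
    """One generic constrained-beam-search step (illustrative sketch).

    beams: list of (sequence, cumulative_log_prob) pairs.
    log_probs_fn: maps a sequence to a {token: log_prob} dict for the next token.
    allowed_vocab: the constrained candidate set (cf. the paper's |V| = 6).
    beam_width: number of hypotheses kept per step (cf. the paper's z = 3).
    """
    candidates = []
    for seq, score in beams:
        log_probs = log_probs_fn(seq)
        for token in allowed_vocab:  # constraint: expand with allowed tokens only
            if token in log_probs:
                candidates.append((seq + [token], score + log_probs[token]))
    # keep the top-scoring hypotheses by cumulative log-probability
    return nlargest(beam_width, candidates, key=lambda c: c[1])

# Toy usage with a fixed next-token distribution (purely illustrative):
toy_lp = lambda seq: {"good": math.log(0.6),
                      "bad": math.log(0.3),
                      "ok": math.log(0.1)}
beams = cbs_step([([], 0.0)], toy_lp, {"good", "bad"}, beam_width=2)
# "ok" is excluded by the constraint; "good" and "bad" survive, ranked by score
```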