Counterfactual-Enhanced Information Bottleneck for Aspect-Based Sentiment Analysis
Authors: Mingshan Chang, Min Yang, Qingshan Jiang, Ruifeng Xu
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on five benchmark ABSA datasets show that our CEIB approach achieves superior prediction performance and robustness over the state-of-the-art baselines. |
| Researcher Affiliation | Academia | (1) Shenzhen Key Laboratory for High Performance Data Mining, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences; (2) University of Chinese Academy of Sciences; (3) Harbin Institute of Technology (Shenzhen) |
| Pseudocode | No | The paper describes the methodology using text and mathematical equations but does not include a structured pseudocode or algorithm block. |
| Open Source Code | Yes | Code and data to reproduce the results in this paper is available at: https://github.com/shesshan/CEIB. |
| Open Datasets | Yes | We conduct our experiments on five benchmark ABSA datasets: REST14 and LAP14 from (Pontiki et al. 2014), REST15 from (Pontiki et al. 2015), REST16 from (Pontiki et al. 2016), and MAMS from (Jiang et al. 2019). We also use ARTS dataset (Xing et al. 2020), including REST14-ARTS and LAP14-ARTS, to test the robustness of the ABSA models. |
| Dataset Splits | Yes | We adopt the official data splits, which are kept the same as in the original papers. The detailed statistics of the utilized datasets are shown in Table 1, which includes Train, Dev (for MAMS), and Test splits. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or cloud computing instance specifications used for experiments. |
| Software Dependencies | No | The paper mentions PyTorch and pretrained models such as T5-XXL and BERT, but does not provide specific version numbers for these software components or libraries. |
| Experiment Setup | Yes | We train all our models for 30 epochs. Adam is used as the optimizer with the initial learning rate as 5e-5 and the weight decay as 1e-4. The hyper-parameter α is set in range 0.5 to 1.0 and β for l2-norm regularization is adaptively set by the optimizer. (A configuration sketch follows the table.) |
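
For concreteness, the hyperparameters reported under Experiment Setup would look roughly as follows in PyTorch. This is a minimal sketch assuming a plain BERT sequence classifier with a 3-way sentiment head; the model name, label count, and training loop are placeholders, and CEIB's information-bottleneck and counterfactual-augmentation components are not implemented here.

```python
# Hedged sketch of the reported training configuration: 30 epochs, Adam,
# learning rate 5e-5, weight decay 1e-4. Backbone and data pipeline are
# illustrative assumptions, not the authors' released code.
import torch
from torch.utils.data import DataLoader
from transformers import BertForSequenceClassification, BertTokenizerFast

device = "cuda" if torch.cuda.is_available() else "cpu"

# BERT backbone with a 3-label head (positive / negative / neutral), a common
# ABSA setup; the actual CEIB classifier differs from this placeholder.
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3
).to(device)
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")

# Optimizer settings as reported in the paper.
optimizer = torch.optim.Adam(model.parameters(), lr=5e-5, weight_decay=1e-4)

def train(train_loader: DataLoader, epochs: int = 30) -> None:
    """Standard fine-tuning loop with the reported epoch count."""
    model.train()
    for _ in range(epochs):
        for batch in train_loader:
            # Each batch is assumed to contain input_ids, attention_mask, labels.
            batch = {k: v.to(device) for k, v in batch.items()}
            outputs = model(**batch)
            loss = outputs.loss  # cross-entropy over the 3 sentiment classes
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```

The α and β terms mentioned in the quote belong to CEIB's information-bottleneck objective and are not represented in this sketch.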