Aspect-based Sentiment Analysis with Opinion Tree Generation

Authors: Xiaoyi Bao, Zhongqing Wang, Xiaotong Jiang, Rong Xiao, Shoushan Li

IJCAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments show the superiority of our proposed method. The results also validate the tree structure is effective to generate sentimental elements. Detailed evaluation shows that our model significantly advances the state-of-the-art performance on several benchmark datasets. In this study, we use ACOS dataset [Cai et al., 2021] for our experiments. We tune the parameters of our models by grid searching on the validation dataset. Our experiments are carried out with an Nvidia RTX 3090 GPU.
Researcher Affiliation | Collaboration | (1) Natural Language Processing Lab, Soochow University, Suzhou, China; (2) Alibaba Group, Hangzhou, China
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks; methods are described in text and figures.
Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a direct link to a code repository for the methodology described.
Open Datasets | Yes | In this study, we use ACOS dataset [Cai et al., 2021] for our experiments. There are 2,286 sentences in Restaurant domain, and 4,076 sentences in Laptop domain. In addition, we choose 20,000 sentences from Yelp (https://www.yelp.com/dataset), and 20,000 sentences from the laptop domain in Amazon (http://jmcauley.ucsd.edu/data/amazon/) to pre-train the proposed opinion tree generation model with the joint pre-training model.
Dataset Splits | Yes | Following the setting from [Cai et al., 2021], we divide the original dataset into a training set, a validation set, and a testing set. (A hedged split-loading sketch follows the table.)
Hardware Specification | Yes | Our experiments are carried out with an Nvidia RTX 3090 GPU.
Software Dependencies | No | The paper mentions employing T5 (linking to its Hugging Face model documentation) but does not specify a version for T5 or for any other software libraries, frameworks (e.g., PyTorch, TensorFlow), or programming languages used.
Experiment Setup | Yes | The dimension of other hidden variables of all the models is 128. The model parameters are optimized by Adam [Kingma and Ba, 2015] with a learning rate of 3e-4. The batch size is 16. (A hedged training-setup sketch follows the table.)
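
For concreteness, a minimal split-loading sketch for the Dataset Splits row is given below. The directory layout, file names ("train.tsv", "dev.tsv", "test.tsv"), tab-separated format, and the load_acos helper are all illustrative assumptions; neither the paper nor the quoted passages specifies a loading interface.

```python
# Hypothetical loader for the ACOS train/validation/test splits
# [Cai et al., 2021]. File names and TSV layout are assumptions,
# not details taken from the paper.
from pathlib import Path

def load_acos(split_dir: str) -> dict:
    """Return {"train": [...], "dev": [...], "test": [...]} for one domain."""
    splits = {}
    for name in ("train", "dev", "test"):
        path = Path(split_dir) / f"{name}.tsv"
        with path.open(encoding="utf-8") as f:
            # Each record: sentence plus its annotated opinion fields (assumed).
            splits[name] = [line.rstrip("\n").split("\t") for line in f]
    return splits

# Hypothetical per-domain directories for the two ACOS domains.
restaurant = load_acos("acos/restaurant")  # 2,286 sentences in total
laptop = load_acos("acos/laptop")          # 4,076 sentences in total
```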
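
The quoted experiment setup fixes only a few values: Adam with a learning rate of 3e-4, a batch size of 16, and a hidden dimension of 128 for the models' other hidden variables. Below is a hedged sketch of one fine-tuning step under that configuration using the Hugging Face transformers API; the "t5-base" checkpoint, the use of torch.optim.Adam, and the toy linearized opinion-tree label are assumptions, since the paper pins none of them.

```python
# Hedged sketch of the reported fine-tuning configuration. "t5-base",
# torch.optim.Adam, and the toy linearized opinion tree are assumptions;
# the paper states only T5, Adam [Kingma and Ba, 2015], lr 3e-4, batch 16.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")
optimizer = torch.optim.Adam(model.parameters(), lr=3e-4)  # from the paper
BATCH_SIZE = 16                                            # from the paper

# One illustrative step: a review sentence paired with a toy linearized
# opinion tree (the paper's exact linearization scheme is not shown here).
inputs = tokenizer(["the battery life is great"], return_tensors="pt")
labels = tokenizer(["(quad (aspect battery life) (opinion great) positive)"],
                   return_tensors="pt").input_ids
loss = model(input_ids=inputs.input_ids,
             attention_mask=inputs.attention_mask,
             labels=labels).loss
loss.backward()
optimizer.step()
```

Note that T5-base's own hidden size is 768; the quoted 128 refers to the other hidden variables of the models, which this sketch does not reproduce.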