An Adaptive Hybrid Framework for Cross-domain Aspect-based Sentiment Analysis

Authors: Yan Zhou, Fuqing Zhu, Pu Song, Jizhong Han, Tao Guo, Songlin Hu (pp. 14630-14637)

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct experiments on four public datasets and the experimental results show that our framework significantly outperforms the state-of-the-art methods.
Researcher Affiliation | Academia | (1) Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China; (2) School of Cyber Security, University of Chinese Academy of Sciences, Beijing, China
Pseudocode | No | The paper describes the architecture and training process in text and diagrams, but it does not include pseudocode or clearly labeled algorithm blocks.
Open Source Code | No | The paper provides links to external tools (word2vec, Yelp dataset challenge) used in the experiments but does not provide a link or statement about the availability of the source code for the proposed AHF framework.
Open Datasets | Yes | We conduct experiments on four benchmark datasets: Restaurant (R), Laptop (L), Device (D) and Service (S). ... The restaurant data is comprised of restaurant reviews from SemEval 2014 (Pontiki et al. 2014), SemEval 2015 (Pontiki et al. 2015) and SemEval 2016 (Pontiki et al. 2016). The laptop data consists of laptop reviews in SemEval 2014 (Pontiki et al. 2014). The device data is created by Hu and Liu (Hu and Liu 2004)... The service dataset contains reviews from the web service and is introduced by Toprak et al. (Toprak, Jakob, and Gurevych 2010).
Dataset Splits | Yes | The training dataset of each transfer pair contains the labeled training data of the source domain and the unlabeled training data of the target domain. We use the labeled testing data of the source domain as the validation set.
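The split construction described above can be sketched in a few lines. This is a minimal illustration of the paper's description, not the authors' code; the dictionary layout and the function name `build_transfer_pair` are our own assumptions.

```python
# Sketch of assembling one cross-domain transfer pair's splits, following the
# paper's description: train on labeled source + unlabeled target data,
# validate on the source domain's labeled test data, evaluate on the target.
# The dict keys below are illustrative assumptions, not the authors' format.

def build_transfer_pair(source, target):
    """source/target: dicts with 'train_labeled', 'train_unlabeled', 'test' lists."""
    train = {
        "labeled": source["train_labeled"],      # labeled source-domain training data
        "unlabeled": target["train_unlabeled"],  # unlabeled target-domain training data
    }
    val = source["test"]    # labeled source-domain test data used as validation set
    test = target["test"]   # final evaluation is on the target domain
    return train, val, test
```

Note that no labeled target-domain data enters training, which is what makes the setting unsupervised domain adaptation.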
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper mentions using the 'Stanford Parser (Manning et al. 2014)' and the 'word2vec' tool but does not specify version numbers for Python, deep learning frameworks (e.g., PyTorch, TensorFlow), or other relevant software libraries.
Experiment Setup | Yes | The dimensions of the word embedding and POS embedding are set to 100 and 15, respectively. We employ a two-layer BiLSTM in our experiment and the hidden units of the LSTM are set to 100. We apply dropout over the embedding layers and BiLSTM layers with a dropout rate of 0.5. The adversarial-learning parameters η and λ_adv are set to 1 and 0.1, respectively. The value of the smoothing coefficient parameter γ is 0.98. The values of β, ρ1 and ρ2 are set to 60, 0.9 and 0.4, respectively. We set the batch size of the source-domain data and target-domain data to 32. The parameters are optimized by the RMSprop algorithm with a learning rate of 0.001.
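The hyperparameters above can be collected into a single configuration for anyone attempting a reproduction. The dict layout and key names below are our own illustrative choice (the paper releases no code), and the assumption that word and POS embeddings are concatenated before the BiLSTM is ours, not stated explicitly in the quoted setup.

```python
# Hyperparameter configuration transcribed from the paper's experiment setup.
# Key names are illustrative; values are as reported in the paper.
CONFIG = {
    "word_emb_dim": 100,
    "pos_emb_dim": 15,
    "bilstm_layers": 2,
    "lstm_hidden": 100,
    "dropout": 0.5,          # applied to embedding layers and BiLSTM layers
    "eta": 1.0,              # adversarial-learning parameter η
    "lambda_adv": 0.1,       # adversarial loss weight λ_adv
    "gamma": 0.98,           # smoothing coefficient γ
    "beta": 60,
    "rho1": 0.9,
    "rho2": 0.4,
    "batch_size": 32,        # for both source- and target-domain batches
    "optimizer": "RMSprop",
    "learning_rate": 0.001,
}

# Assuming word and POS embeddings are concatenated (an assumption, not
# stated in the paper), the BiLSTM input dimension would be:
input_dim = CONFIG["word_emb_dim"] + CONFIG["pos_emb_dim"]  # 115
```

Such a config dict makes it easy to log and diff settings across reproduction runs, which matters here since the paper reports no software versions or hardware.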