Bidirectional Machine Reading Comprehension for Aspect Sentiment Triplet Extraction

Authors: Shaowei Chen, Yu Wang, Jie Liu, Yuelin Wang (pp. 12666-12674)

AAAI 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "To verify the effectiveness of our approach, we conduct extensive experiments on four benchmark datasets. The experimental results demonstrate that BMRC achieves state-of-the-art performances."
Researcher Affiliation | Collaboration | "1 College of Artificial Intelligence, Nankai University, Tianjin, China; 2 Cloopen Research, Beijing, China"
Pseudocode | No | The paper describes the model architecture and equations but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | "Code is available at: https://github.com/NKU-IIPLab/BMRC."
Open Datasets | Yes | "we conduct experiments on four benchmark datasets from the SemEval ABSA Challenges (Pontiki et al. 2014, 2015, 2016) and list the statistics of these datasets in Table 1." Footnote 5: https://github.com/xuuuluuu/SemEval-Triplet-data
Dataset Splits | Yes | Train/Dev/Test splits are reported per dataset (#S sentences, #T triplets), e.g., 14-Lap (Pontiki et al. 2014): Train 920/1265, Dev 228/337, Test 339/490... "According to the triplet extraction F1 score on the development sets, the threshold δ is manually tuned to 0.8 in the range [0, 1) with step size set to 0.1." (see the threshold-tuning sketch after this table)
Hardware Specification | Yes | "We run our model on a Tesla V100 GPU and train our model for 40 epochs in about 1.5 h."
Software Dependencies | No | The paper mentions adopting 'BERT-base' but does not specify other software dependencies with version numbers (e.g., Python or PyTorch versions).
Experiment Setup | Yes | "During training, we use AdamW (Loshchilov and Hutter 2017) for optimization with weight decay 0.01 and warmup rate 0.1. The learning rate for training classifiers and the fine-tuning rate for BERT are set to 1e-3 and 1e-5, respectively. Meanwhile, we set batch size to 4 and dropout rate to 0.1." (see the optimizer sketch after this table)
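
The following is a minimal sketch of the reported training setup, assuming a PyTorch/HuggingFace-style implementation. The split into BERT vs. classifier parameter groups (keyed on parameter names starting with "bert") and the linear warmup schedule are assumptions, not details confirmed by the paper; only the hyperparameter values (weight decay 0.01, warmup rate 0.1, BERT fine-tuning rate 1e-5, classifier learning rate 1e-3) come from the quoted setup.

```python
# Hedged sketch: reproduce the reported optimizer settings under the
# assumption of a PyTorch model whose BERT parameters are named "bert.*".
import torch
from torch.optim import AdamW
from transformers import get_linear_schedule_with_warmup


def build_optimizer_and_scheduler(model, num_training_steps):
    # Assumed split: BERT encoder parameters are fine-tuned at 1e-5,
    # all remaining (classifier) parameters are trained at 1e-3.
    bert_params = [p for n, p in model.named_parameters() if n.startswith("bert")]
    classifier_params = [p for n, p in model.named_parameters() if not n.startswith("bert")]

    optimizer = AdamW(
        [
            {"params": bert_params, "lr": 1e-5},
            {"params": classifier_params, "lr": 1e-3},
        ],
        weight_decay=0.01,  # reported weight decay
    )
    # Warmup rate 0.1 interpreted as: first 10% of training steps are linear warmup.
    scheduler = get_linear_schedule_with_warmup(
        optimizer,
        num_warmup_steps=int(0.1 * num_training_steps),
        num_training_steps=num_training_steps,
    )
    return optimizer, scheduler
```

With the reported batch size of 4 and 40 training epochs, `num_training_steps` would be `40 * ceil(len(train_set) / 4)`; whether the schedule is linear or of another shape is not stated in the paper.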
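The threshold-tuning procedure quoted in the Dataset Splits row can be sketched as a simple grid search over [0, 1) with step 0.1 on the development set. `evaluate_triplet_f1`, `model`, and `dev_data` are hypothetical placeholders for the authors' evaluation routine and data; only the search range, step size, and the selected value δ = 0.8 come from the paper.

```python
# Hedged sketch of dev-set tuning of the matching threshold δ.
def tune_threshold(model, dev_data, evaluate_triplet_f1):
    best_delta, best_f1 = 0.0, -1.0
    for step in range(10):  # candidate thresholds 0.0, 0.1, ..., 0.9
        delta = step / 10.0
        f1 = evaluate_triplet_f1(model, dev_data, threshold=delta)
        if f1 > best_f1:
            best_delta, best_f1 = delta, f1
    return best_delta, best_f1  # the paper reports δ = 0.8 as the chosen value
```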