A Unified Model for Opinion Target Extraction and Target Sentiment Prediction
Authors: Xin Li, Lidong Bing, Piji Li, Wai Lam
AAAI 2019, pp. 6714–6721
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments on three benchmark datasets and our framework achieves consistently superior results. |
| Researcher Affiliation | Collaboration | 1Department of Systems Engineering and Engineering Management, The Chinese University of Hong Kong 2R&D Center Singapore, Machine Intelligence Technology, Alibaba DAMO Academy 3Tencent AI Lab, Shenzhen, China |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | We publicly release our implementation at https://github.com/lixin4ever/E2E-TBSA. |
| Open Datasets | Yes | Our model is evaluated on two product review datasets from SemEval ABSA challenges (Pontiki 2014; 2015; 2016) and the Twitter dataset. ... DL (SemEval 2014) ... DR is the union set of the restaurant datasets from SemEval ABSA challenge 2014, 2015 and 2016. ... DT consists of tweets collected by (Mitchell et al. 2013). |
| Dataset Splits | Yes | For DL and DR, we regard 10% randomly held-out training data as the development set. For DT, we report the ten-fold cross validation results, as done in (Mitchell et al. 2013; Zhang, Zhang, and Vo 2015), since there is no standard train-test split for this dataset. |
| Hardware Specification | No | The paper does not provide specific hardware details such as exact GPU/CPU models, processor types, or memory amounts used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., library or solver names like Python 3.8, PyTorch 1.9) needed to replicate the experiment. |
| Experiment Setup | Yes | Our models are trained up to 50 epochs with Adam (Kingma and Ba 2014), with β1 = β2 = 0.9, and the initial learning rate η0 = 10⁻³. ... We apply dropout on word embeddings and the ultimate features for prediction. The dropout rates are empirically set as 0.5. ... Both of the dimensions of the hidden representations, dim_h^T and dim_h^S, are 50. The maximum proportion ϵ of the boundary-based scores is 0.5. The size of the context window s in the opinion-based target word detection component is 3. |
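
The hyperparameters quoted above can be wired together in a few lines. The following is a minimal illustrative sketch assuming PyTorch; the `TinyTagger` model, its vocabulary and tag sizes, and the embedding dimension are hypothetical stand-ins, not the authors' released implementation (which lives at the linked E2E-TBSA repository):

```python
import torch
import torch.nn as nn

# Reported settings: Adam with beta1 = beta2 = 0.9, initial learning
# rate 1e-3, dropout 0.5 on word embeddings and the ultimate features,
# hidden representation size 50, trained for up to 50 epochs.
HIDDEN_DIM = 50   # dim_h^T / dim_h^S in the paper
DROPOUT = 0.5
LR = 1e-3
EPOCHS = 50

class TinyTagger(nn.Module):
    """Hypothetical stand-in for the paper's sequence tagger."""
    def __init__(self, vocab_size=1000, emb_dim=100, num_tags=7):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.emb_drop = nn.Dropout(DROPOUT)    # dropout on word embeddings
        self.lstm = nn.LSTM(emb_dim, HIDDEN_DIM, batch_first=True)
        self.feat_drop = nn.Dropout(DROPOUT)   # dropout on final features
        self.out = nn.Linear(HIDDEN_DIM, num_tags)

    def forward(self, token_ids):
        h, _ = self.lstm(self.emb_drop(self.emb(token_ids)))
        return self.out(self.feat_drop(h))

model = TinyTagger()
optimizer = torch.optim.Adam(model.parameters(), lr=LR, betas=(0.9, 0.9))
```

Note that β2 = 0.9 departs from Adam's common default of 0.999, so it must be set explicitly rather than relying on the optimizer's defaults.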