Coupled Multi-Layer Attentions for Co-Extraction of Aspect and Opinion Terms
Authors: Wenya Wang, Sinno Jialin Pan, Daniel Dahlmeier, Xiaokui Xiao
AAAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on three benchmark datasets in SemEval Challenge 2014 and 2015 show that our model achieves state-of-the-art performances compared with several baselines. We conduct extensive experiments on three benchmark datasets to verify that our model achieves state-of-the-art performance for aspect and opinion terms co-extraction. |
| Researcher Affiliation | Collaboration | Wenya Wang, Sinno Jialin Pan, Daniel Dahlmeier, Xiaokui Xiao Nanyang Technological University, Singapore SAP Innovation Center Singapore {wa0001ya, sinnopan, xkxiao}@ntu.edu.sg, {d.dahlmeier}@sap.com |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement about releasing its own source code, nor does it include a link to a code repository for the methodology described. It mentions third-party tools like the 'word2vec' tool and the 'Theano' library but not its own implementation. |
| Open Datasets | Yes | We evaluate and compare our proposed model on three benchmark datasets, as described in Table 1. They are taken from SemEval Challenge 2014 task 4 subtask 1 (Pontiki et al. 2014) and SemEval Challenge 2015 task 12 subtask 1 (Pontiki et al. 2015). For the restaurant domain, we apply word2vec on the Yelp Challenge dataset consisting of 2.2M restaurant reviews... For the laptop domain, we use the corpus from the electronic domain in Amazon reviews (McAuley et al. 2015). |
| Dataset Splits | Yes | S1 SemEval-14 Restaurant: 3,041 training / 800 test / 3,841 total; S2 SemEval-14 Laptop: 3,045 training / 800 test / 3,845 total; S3 SemEval-15 Restaurant: 1,315 training / 685 test / 2,000 total. Note that all the above parameters are chosen through cross-validation. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU, GPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper mentions using the 'word2vec' tool and the 'Theano' library but does not provide specific version numbers for these software components. |
| Experiment Setup | Yes | The size of the hidden units for each layer is 50 for all three datasets. We use a 2-layer attention network for experiments. For each layer, the first dimension K of tensors is set to be 20 for S1 and S3 (15 for S2). We use a fixed learning rate for all experiments: 0.07 for S1, S3, and 0.1 for S2. The dropout rate is set to be 0.5 for non-recurrent parameters of GRU. |
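Since the paper releases no code, the reported hyperparameters from the Experiment Setup row can be collected into a small config sketch for anyone attempting a reproduction. All names here (`CONFIGS` and its keys) are illustrative, not from the authors' implementation; only the numeric values come from the paper.

```python
# Hyperparameters as reported in the paper, per dataset.
# S1: SemEval-14 Restaurant, S2: SemEval-14 Laptop, S3: SemEval-15 Restaurant.
# Key names are hypothetical; the authors did not release code.
CONFIGS = {
    "S1": {"hidden_units": 50, "attention_layers": 2,
           "tensor_dim_K": 20, "learning_rate": 0.07,
           "gru_nonrecurrent_dropout": 0.5},
    "S2": {"hidden_units": 50, "attention_layers": 2,
           "tensor_dim_K": 15, "learning_rate": 0.1,
           "gru_nonrecurrent_dropout": 0.5},
    "S3": {"hidden_units": 50, "attention_layers": 2,
           "tensor_dim_K": 20, "learning_rate": 0.07,
           "gru_nonrecurrent_dropout": 0.5},
}

# Sanity check: settings the paper states are shared across all datasets.
assert all(c["hidden_units"] == 50 for c in CONFIGS.values())
assert all(c["attention_layers"] == 2 for c in CONFIGS.values())
```

Note that the paper says these values were chosen via cross-validation, so a reproduction on other data would need its own search over these parameters.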