Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Coupled Multi-Layer Attentions for Co-Extraction of Aspect and Opinion Terms
Authors: Wenya Wang, Sinno Jialin Pan, Daniel Dahlmeier, Xiaokui Xiao
AAAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on three benchmark datasets in SemEval Challenge 2014 and 2015 show that our model achieves state-of-the-art performances compared with several baselines. We conduct extensive experiments on three benchmark datasets to verify that our model achieves state-of-the-art performance for aspect and opinion terms co-extraction. |
| Researcher Affiliation | Collaboration | Wenya Wang, Sinno Jialin Pan, Daniel Dahlmeier, Xiaokui Xiao — Nanyang Technological University, Singapore; SAP Innovation Center Singapore; {d.dahlmeier}@sap.com |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement about releasing its own source code, nor does it include a link to a code repository for the methodology described. It mentions third-party tools like the word2vec tool and the Theano library, but not its own implementation. |
| Open Datasets | Yes | We evaluate and compare our proposed model on three benchmark datasets, as described in Table 1. They are taken from SemEval Challenge 2014 task 4 subtask 1 (Pontiki et al. 2014) and SemEval Challenge 2015 task 12 subtask 1 (Pontiki et al. 2015). For the restaurant domain, we apply word2vec on the Yelp Challenge dataset consisting of 2.2M restaurant reviews... For the laptop domain, we use the corpus from the electronic domain in Amazon reviews (McAuley et al. 2015). |
| Dataset Splits | Yes | S1 SemEval-14 Restaurant: 3,041 training / 800 test / 3,841 total. S2 SemEval-14 Laptop: 3,045 training / 800 test / 3,845 total. S3 SemEval-15 Restaurant: 1,315 training / 685 test / 2,000 total. Note that all the above parameters are chosen through cross-validation. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU, GPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper mentions using the word2vec tool and the Theano library, but does not provide specific version numbers for these software components. |
| Experiment Setup | Yes | The size of the hidden units for each layer is 50 for all three datasets. We use a 2-layer attention network for experiments. For each layer, the first dimension K of tensors is set to be 20 for S1 and S3 (15 for S2). We use a fixed learning rate for all experiments: 0.07 for S1, S3, and 0.1 for S2. The dropout rate is set to be 0.5 for non-recurrent parameters of GRU. |
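The hyperparameters quoted in the Experiment Setup row can be collected into a small per-dataset configuration sketch. The field names below are assumptions chosen for readability; only the numeric values come from the paper's reported setup.

```python
# Hyperparameters reported in the paper's experiment setup, per dataset.
# Key names are illustrative; the paper does not use these identifiers.
CONFIGS = {
    "S1": {"hidden_units": 50, "attention_layers": 2, "tensor_dim_K": 20,
           "learning_rate": 0.07, "gru_dropout": 0.5},  # SemEval-14 Restaurant
    "S2": {"hidden_units": 50, "attention_layers": 2, "tensor_dim_K": 15,
           "learning_rate": 0.1, "gru_dropout": 0.5},   # SemEval-14 Laptop
    "S3": {"hidden_units": 50, "attention_layers": 2, "tensor_dim_K": 20,
           "learning_rate": 0.07, "gru_dropout": 0.5},  # SemEval-15 Restaurant
}

def config_for(dataset: str) -> dict:
    """Return the reported hyperparameters for a dataset key (S1, S2, or S3)."""
    return CONFIGS[dataset]
```

This layout makes the one dataset-specific difference explicit: S2 (laptop) uses a smaller tensor first dimension (K=15) and a larger learning rate (0.1) than the two restaurant datasets.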