Extracting Verb Expressions Implying Negative Opinions
Authors: Huayi Li, Arjun Mukherjee, Jianfeng Si, Bing Liu
AAAI 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results using real-life review datasets show that our approach outperforms strong baselines. |
| Researcher Affiliation | Academia | Department of Computer Science, University of Illinois at Chicago, IL, USA; Department of Computer Science, University of Houston, TX, USA; Institute for Infocomm Research, Singapore |
| Pseudocode | Yes | Algorithm 1 Extracting Verb Expressions From a Sentence |
| Open Source Code | No | The paper mentions external tools such as the Open NLP chunker (http://opennlp.apache.org/) and LIBSVM (http://www.csie.ntu.edu.tw/~cjlin/libsvm/) but provides no statement or link releasing code for its own method (a hedged chunker-based extraction sketch follows the table). |
| Open Datasets | No | We conduct our experiments using customer reviews from three electronic product domains: mouse, keyboard, and wireless router, collected from Amazon.com. |
| Dataset Splits | No | Training instances for our models are verb expressions extracted from titles of positive (5 stars) and negative (1 star) reviews. Test instances are verb expressions from both the titles and bodies of reviews whose ratings are 1 or 2 stars and they are labeled manually by two human judges. Table 2 shows the statistics of our data. The paper does not specify a validation set or clear train/validation/test split percentages. |
| Hardware Specification | No | The paper does not provide any specific details regarding the hardware used for running the experiments. |
| Software Dependencies | No | The paper mentions using 'Open NLP chunker' and 'LIBSVM' but does not provide version numbers for these or any other software dependencies (a hedged LIBSVM-style classification sketch follows the table). |
| Experiment Setup | No | The paper describes the feature engineering and the model (Markov Networks) but does not provide specific experimental setup details such as hyperparameters (e.g., learning rate, batch size, number of epochs) or optimizer settings. |
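
The pseudocode row names "Algorithm 1 Extracting Verb Expressions From a Sentence" and the open-source-code row names the Open NLP chunker, but neither is reproduced here. The following Python sketch is a rough illustration only: it chunks verb-phrase candidates out of a review sentence using NLTK's regexp chunker as a stand-in for the OpenNLP chunker the paper cites. The grammar, the function name `extract_verb_expressions`, and the example sentence are assumptions, not the paper's Algorithm 1.

```python
# Hypothetical sketch: pull verb-phrase candidates from a review sentence.
# NLTK's regexp chunker stands in for the Open NLP chunker the paper cites;
# the grammar and names below are assumptions, not the paper's Algorithm 1.
import nltk

# One-time downloads (uncomment on first run):
# nltk.download("punkt")
# nltk.download("averaged_perceptron_tagger")

# Simple cascaded grammar: noun phrases first, then verb phrases that may
# take an optional adverb, particle, and noun-phrase object.
VP_GRAMMAR = r"""
  NP: {<DT>?<JJ.*>*<NN.*>+}
  VP: {<RB.*>?<VB.*>+<RP>?<NP>?}
"""
chunker = nltk.RegexpParser(VP_GRAMMAR)

def extract_verb_expressions(sentence: str):
    """Return chunked verb-phrase candidates as lists of (token, POS) pairs."""
    tokens = nltk.word_tokenize(sentence)
    tagged = nltk.pos_tag(tokens)
    tree = chunker.parse(tagged)
    return [subtree.leaves() for subtree in tree.subtrees() if subtree.label() == "VP"]

if __name__ == "__main__":
    # Example negative-review sentence; prints candidate verb expressions.
    for vp in extract_verb_expressions("The mouse stopped working after two weeks."):
        print(vp)
```
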
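Likewise, the software row names LIBSVM without a version and the setup row notes that no hyperparameters are given. The sketch below shows one plausible way to train a polarity classifier on the distantly labeled data described in the dataset-splits row (verb expressions from 5-star review titles as positive, from 1-star titles as negative), using scikit-learn's LIBSVM-backed `SVC`. The bag-of-words features, linear kernel, `C=1.0`, and toy examples are all assumptions; this is a baseline-style sketch, not the paper's Markov Network model.

```python
# Hypothetical sketch: polarity classification of distantly labeled verb expressions.
# scikit-learn's SVC wraps LIBSVM, which the paper names without a version;
# every feature choice and hyperparameter here is an assumption.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Toy stand-ins for extracted verb expressions; the real data comes from
# Amazon reviews of mice, keyboards, and wireless routers.
train_expressions = ["works great", "stopped working", "love using it", "broke down"]
train_labels = [1, 0, 1, 0]  # 1 = from 5-star title, 0 = from 1-star title

model = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),  # unigram + bigram features (assumption)
    SVC(kernel="linear", C=1.0),          # default-style LIBSVM settings (assumption)
)
model.fit(train_expressions, train_labels)

# Classify verb expressions taken from 1- and 2-star review titles and bodies.
print(model.predict(["keeps disconnecting", "works perfectly"]))
```
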