A Neural Group-wise Sentiment Analysis Model with Data Sparsity Awareness
Authors: Deyu Zhou, Meng Zhang, Linhai Zhang, Yulan He (pp. 14594-14601)
AAAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on three real-world datasets show that the proposed approach outperforms some state-of-the-art methods. Moreover, model analysis and a case study demonstrate its effectiveness in modeling user rating biases and variances. |
| Researcher Affiliation | Academia | 1 School of Computer Science and Engineering, Key Laboratory of Computer Network and Information Integration, Ministry of Education, Southeast University, China 2 Department of Computer Science, University of Warwick, UK |
| Pseudocode | No | The paper describes the model architecture and equations but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | No | The paper provides a link (https://github.com/rktamplayo/HCSC) for a baseline model (HCSC) but does not state that the code for their proposed method (NGSAM) is openly available or provide a link for it. |
| Open Datasets | Yes | We evaluate the effectiveness of the proposed method on three real-world datasets: Yelp 2013 (Tang, Qin, and Liu 2015), Yelp 2014 (Tang, Qin, and Liu 2015) and Twitter (Go, Bhayani, and Huang 2009). |
| Dataset Splits | Yes | All the parameters are chosen based on the validation sets which are 10% of the training sets. |
| Hardware Specification | No | The paper does not provide specific details on the hardware (e.g., CPU, GPU models, memory) used to run the experiments. |
| Software Dependencies | Yes | We implement the models in TensorFlow 2.0. |
| Experiment Setup | Yes | The dimensions of the embeddings and the hidden states are set to 300 and the batch size is 32. The number of epochs is 10. λ1, λ2 and λ3 are all set to 1 and λ4 is 0.001. The loss function is minimized using Adam optimizer (Kingma and Ba 2014) with a learning rate of 0.001 and a dropout rate of 0.5. The numbers of user groups on Yelp 2013, Yelp 2014 and Twitter are set empirically as 10, 4 and 6 respectively. |
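
The hyperparameters reported in the Experiment Setup row can be collected into a single configuration sketch for anyone attempting a reproduction. This is not the authors' code (none is released for NGSAM); the dictionary keys and the `num_user_groups` mapping are hypothetical names chosen here to mirror the quoted values.

```python
# Hypothetical reproduction config assembled from the values quoted in the
# Experiment Setup row; key names are illustrative, not from the paper.
EXPERIMENT_CONFIG = {
    "embedding_dim": 300,       # dimensions of embeddings and hidden states
    "hidden_dim": 300,
    "batch_size": 32,
    "num_epochs": 10,
    "lambda1": 1.0,             # loss-term weights lambda_1..lambda_3
    "lambda2": 1.0,
    "lambda3": 1.0,
    "lambda4": 0.001,
    "optimizer": "Adam",        # Kingma and Ba 2014
    "learning_rate": 0.001,
    "dropout_rate": 0.5,
    # number of user groups, set empirically per dataset
    "num_user_groups": {"Yelp 2013": 10, "Yelp 2014": 4, "Twitter": 6},
    "validation_split": 0.10,   # validation sets are 10% of the training sets
}

def groups_for(dataset: str) -> int:
    """Look up the empirically chosen group count for a dataset."""
    return EXPERIMENT_CONFIG["num_user_groups"][dataset]
```

A reproduction in TensorFlow 2.0 (the framework the paper names) would feed these values into `tf.keras.optimizers.Adam` and the dropout layers; the dictionary above only records what the paper states.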