Personalized Microblog Sentiment Classification via Multi-Task Learning

Authors: Fangzhao Wu, Yongfeng Huang

AAAI 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments on two real-world microblog sentiment datasets validate that our approach can improve microblog sentiment classification accuracy effectively and efficiently."
Researcher Affiliation | Academia | "Fangzhao Wu and Yongfeng Huang, Tsinghua National Laboratory for Information Science and Technology, Department of Electronic Engineering, Tsinghua University, Beijing 100084, China. wufangzhao@gmail.com, yfhuang@tsinghua.edu.cn"
Pseudocode | Yes | "Algorithm 1: Accelerated algorithm for updating w_i."
Open Source Code | No | The paper does not provide any statement or link indicating that its source code is publicly available.
Open Datasets | Yes | "The first one is a Twitter sentiment dataset (denoted as Twitter). It was created by selecting top 1,000 users with highest numbers of messages in Sentiment140 sentiment corpus [1], which was crawled via Twitter API using emoticons, such as :) and :( , as queries. Our Twitter dataset contains 78,135 messages, each with a sentiment label automatically assigned by emoticons." (Footnote 1: http://help.sentiment140.com/for-students)
Dataset Splits | Yes | "We manually labeled 10 randomly selected messages for each user as test data. The remaining messages were used for training. The detailed statistics of these datasets are summarized in Table 1. #Train and #Test represent the numbers of messages used for training and test respectively."
Hardware Specification | Yes | "We implemented our algorithm using Matlab 2014a, and conducted the experiments on a computer with Intel Core i7 CPU and 16 GB RAM."
Software Dependencies | Yes | "We implemented our algorithm using Matlab 2014a"
Experiment Setup | No | The paper states: "The parameter values of our approach and all the baseline methods were selected via cross-validation on training data." However, it does not explicitly list the specific hyperparameter values (e.g., learning rates, batch sizes, regularization coefficients) used in the final experiments.
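The dataset-construction and splitting procedures quoted above (keep the top 1,000 users with the most messages, then hold out 10 randomly selected messages per user as test data) can be sketched as follows. This is a minimal illustration, not the authors' code: the paper's implementation was in Matlab, and the function names and toy data here are assumptions.

```python
import random
from collections import Counter


def top_k_users(messages, k=1000):
    """Return the k users with the most messages.

    Mirrors the paper's construction of the Twitter dataset, which keeps
    the top 1,000 users by message count in the Sentiment140 corpus.
    `messages` is a list of (user_id, text) pairs.
    """
    counts = Counter(user for user, _ in messages)
    return {user for user, _ in counts.most_common(k)}


def per_user_split(messages, users, n_test=10, seed=0):
    """Hold out n_test randomly chosen messages per user as test data.

    The paper holds out 10 manually labeled messages per user; the
    remaining messages of each user form that user's training set.
    """
    rng = random.Random(seed)
    train, test = {}, {}
    for user in users:
        msgs = [text for u, text in messages if u == user]
        rng.shuffle(msgs)
        test[user] = msgs[:n_test]
        train[user] = msgs[n_test:]
    return train, test
```

With `k=1000` and `n_test=10` this reproduces the split sizes described in the quotes above; note that the paper's held-out messages were labeled manually, whereas the training labels came from emoticons.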