Text Emotion Distribution Learning via Multi-Task Convolutional Neural Network

Authors: Yuxiang Zhang, Jiamei Fu, Dongyu She, Ying Zhang, Senzhang Wang, Jufeng Yang

IJCAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments conducted on five public text datasets (i.e., SemEval, Fairy Tales, ISEAR, TEC, CBET) demonstrate that our proposed method performs favorably against the state-of-the-art approaches.
Researcher Affiliation | Academia | (1) College of Computer Science and Technology, Civil Aviation University of China, Tianjin, China; (2) College of Computer and Control Engineering, Nankai University, Tianjin, China; (3) College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China
Pseudocode | No | No pseudocode or algorithm blocks were found in the paper.
Open Source Code | No | The paper contains no statement or link indicating that the source code for its method is publicly available.
Open Datasets | Yes | SemEval [Strapparava and Mihalcea, 2007] is a distribution dataset that contains 1250 news headlines... ISEAR [Scherer and Wallbott, 1994] consists of 7666 sentences... Fairy Tales [Alm and Sproat, 2005] contains 185 children's stories... TEC [Mohammad, 2012] includes 21,051 emotional tweets... CBET [Shahraki, 2015] consists of 76,860 tweets...
Dataset Splits | Yes | For SemEval, we adopt the standard 1000 headlines for training and 250 headlines for testing to run our experiments. We randomly choose 90% of train samples for training, the remaining 10% for testing. ... we perform 10-fold cross validation on all the above datasets, and report the average results. (A sketch of this split protocol appears after the table.)
Hardware Specification | No | The paper does not specify the hardware used to run the experiments.
Software Dependencies | Yes | Our framework is implemented using Torch7.
Experiment Setup | Yes | For the CNN framework, we use filter windows of 3, 4, 5 with 100 feature maps each, dropout rate of 0.5, and minibatch size of 50, following the same routine in [Kim, 2014]. (A hedged re-implementation sketch of this setup follows the table.)
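
The split protocol in the Dataset Splits row is straightforward to restate in code. The following is a minimal sketch, assuming the corpora are already loaded into arrays; `texts`, `labels`, and the random seed are placeholders of ours, not artifacts of the paper (whose code is not released).

```python
# Illustrative sketch of the quoted split protocol; `texts` and `labels`
# are placeholders, not the authors' data pipeline.
import numpy as np
from sklearn.model_selection import KFold, train_test_split

texts = np.array([f"headline {i}" for i in range(1250)])  # placeholder corpus
labels = np.random.randint(0, 6, size=1250)               # placeholder labels

# SemEval: the fixed standard split (1000 training / 250 test headlines).
semeval_train, semeval_test = texts[:1000], texts[1000:]

# Other datasets: random 90%/10% train/test split.
train_texts, test_texts = train_test_split(texts, test_size=0.1, random_state=0)

# 10-fold cross validation; the paper reports results averaged over folds.
for fold, (tr_idx, te_idx) in enumerate(
        KFold(n_splits=10, shuffle=True, random_state=0).split(texts)):
    pass  # train on texts[tr_idx], evaluate on texts[te_idx]
```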
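The hyperparameters in the Experiment Setup row are the Kim (2014) text-CNN recipe, which the paper uses as its backbone. Below is a hedged PyTorch re-sketch of that backbone only (the authors implemented theirs in Torch7, and the multi-task emotion-distribution heads are not reproduced here); the vocabulary size, embedding dimension, sequence length, and six-class emotion set are assumed values.

```python
# A sketch of a Kim-style text CNN with the reported hyperparameters:
# filter windows 3/4/5, 100 feature maps each, dropout 0.5.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextCNN(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=300, num_classes=6,
                 windows=(3, 4, 5), feature_maps=100, dropout=0.5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, feature_maps, kernel_size=w) for w in windows)
        self.dropout = nn.Dropout(dropout)
        self.fc = nn.Linear(feature_maps * len(windows), num_classes)

    def forward(self, token_ids):                   # (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)   # (batch, embed_dim, seq_len)
        # Convolve with each window size, then max-pool over time.
        pooled = [F.relu(conv(x)).max(dim=2).values for conv in self.convs]
        h = self.dropout(torch.cat(pooled, dim=1))  # (batch, 300)
        return self.fc(h)

model = TextCNN()
logits = model(torch.randint(0, 20000, (50, 40)))   # minibatch size of 50
probs = F.softmax(logits, dim=1)  # a per-text emotion distribution
```

In a distribution-learning setting such as this paper's, the softmax output would be fit against the ground-truth emotion distribution (e.g., with a KL-divergence loss), while the multi-task variant would attach an additional classification head; those details are not shown here.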