Topic Enhanced Sentiment Spreading Model in Social Networks Considering User Interest
Authors: Xiaobao Wang, Di Jin, Katarzyna Musial, Jianwu Dang
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on the Twitter dataset show that the proposed model significantly outperforms several alternative methods in predicting users' sentiment status. (A Section 4 "Experiment" heading is also present.) |
| Researcher Affiliation | Academia | 1College of Intelligence and Computing, Tianjin University, Tianjin 300350, China, 2Advanced Analytics Institute, School of Software, University of Technology Sydney, Australia, 3School of Information Science, Japan Advanced Institute of Science and Technology, Japan |
| Pseudocode | Yes | Algorithm 1 Learning TSSM |
| Open Source Code | No | The paper does not provide any explicit statements about the availability of its source code, nor does it include a link to a code repository. |
| Open Datasets | Yes | We use the public Twitter dataset (Rui et al. 2012) collected in May 2011. |
| Dataset Splits | No | The paper mentions constructing a 'training dataset' and predicting sentiment status in the 'future (time t + 1)', which serves as the test data. However, it does not provide specific percentages or counts for training, validation, or test splits, nor does it explicitly mention a separate validation set. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments, such as GPU models, CPU types, or memory specifications. |
| Software Dependencies | No | The paper mentions using 'Lib SVM (Chang and Lin 2011)', 'Weka (Frank, Hall, and Witten 2016)', and 'Senti Strength (Thelwall, Buckley, and Paltoglou 2012; Thelwall and Buckley 2013)', but it does not specify version numbers for any of these software dependencies. |
| Experiment Setup | Yes | In the experiment, to ensure that most users have at least one tweet at each timestamp, we set the time step to two days. ... In practice, we set the upper bound of Δt as 2 unit timestamps (1 ≤ Δt ≤ 2) to reduce the computational complexity of the proposed model. ... we first remove all stop words, slang words, and non-English phrases. Next, we iteratively filter away words, tweets, and users such that: each word must appear in at least 3 remaining tweets, each tweet contains at least 3 remaining words, and each user has at least 20 remaining tweets. (A sketch of this filtering procedure follows the table.) |
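The iterative filtering described in the Experiment Setup row is the one concretely reproducible step reported here. Below is a minimal Python sketch of how such a fixed-point filter could be implemented, assuming tweets are represented as (user, token-list) pairs; the function name, parameter names, and data layout are illustrative assumptions, not taken from the paper.

```python
from collections import Counter

# Hypothetical input format: a list of (user_id, [token, ...]) pairs.
# Thresholds follow the paper's stated criteria: each word appears in
# >= 3 remaining tweets, each tweet keeps >= 3 words, each user keeps
# >= 20 tweets.
def iterative_filter(tweets, min_word_tweets=3, min_tweet_words=3,
                     min_user_tweets=20):
    vocab = None  # None means "keep every word" on the first pass
    while True:
        # Drop words no longer in the vocabulary, then drop short tweets.
        pruned = []
        for user, words in tweets:
            kept = [w for w in words if vocab is None or w in vocab]
            if len(kept) >= min_tweet_words:
                pruned.append((user, kept))
        # Drop users with too few remaining tweets.
        per_user = Counter(user for user, _ in pruned)
        pruned = [(u, ws) for u, ws in pruned if per_user[u] >= min_user_tweets]
        # Recompute the vocabulary from document frequency over remaining tweets.
        df = Counter(w for _, ws in pruned for w in set(ws))
        new_vocab = {w for w, count in df.items() if count >= min_word_tweets}
        # Stop once nothing changes: all three constraints hold simultaneously.
        if pruned == tweets and new_vocab == vocab:
            return pruned
        tweets, vocab = pruned, new_vocab
```

Because each pass can only remove words, tweets, or users, the loop shrinks the data monotonically and terminates at a fixed point where all three thresholds hold at once; this interleaved re-checking is why the paper describes the filtering as iterative rather than a single pass.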