Burst Time Prediction in Cascades
Authors: Senzhang Wang, Zhao Yan, Xia Hu, Philip S. Yu, Zhoujun Li
AAAI 2015 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on a Sina Weibo reposting dataset demonstrate the superior performance of the proposed approach in accurately predicting the burst time of posts. |
| Researcher Affiliation | Academia | Senzhang Wang, Zhao Yan, Xia Hu, Philip S. Yu, Zhoujun Li. State Key Laboratory of Software Development Environment, Beihang University, Beijing 100191, China; Arizona State University, Tempe, AZ 85287, USA; University of Illinois at Chicago, Chicago, IL 60607, USA. {szwang, yanzhao, lizj}@buaa.edu.cn, psyu@uic.edu, xia.hu@asu.edu |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement or link for the open-source code of the methodology described. |
| Open Datasets | Yes | We use the publicly available Sina Weibo reposting dataset¹ (Zhang et al. 2013) to evaluate the proposed approach. This dataset contains 1,776,950 users, 308,489,739 following relationships, and 300,000 popular microblog diffusion episodes, each with the original microblog and all of its reposts. On average, each microblog has been reposted about 80 times. ¹ http://arnetminer.org/Influencelocality#b2354 |
| Dataset Splits | Yes | We use 10-fold cross validation to evaluate on three metrics: F1-measure, AUC (Area Under ROC Curve), and classification accuracy (see the evaluation sketch after this table). |
| Hardware Specification | No | The paper does not provide any specific hardware details (like GPU/CPU models, memory, or specific computing environments) used for running experiments. |
| Software Dependencies | No | The paper lists the learning algorithms used (e.g., 'Random Tree', 'LibSVM') but does not provide version numbers for these components or for any other libraries or frameworks. |
| Experiment Setup | No | The paper does not provide specific experimental setup details such as hyperparameter values (e.g., learning rates, batch sizes, number of epochs) or specific training configurations for the learning algorithms used. |
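The Dataset Splits and Software Dependencies rows together describe the reported evaluation protocol: 10-fold cross validation scored with F1-measure, AUC, and classification accuracy, using off-the-shelf classifiers such as LibSVM. Below is a minimal sketch of that protocol, assuming the burst-time task is posed as binary classification over precomputed cascade features (the paper does not release feature-extraction code or classifier hyperparameters); scikit-learn's `SVC`, which wraps LibSVM, stands in for the paper's classifier, and the toy arrays are purely illustrative.

```python
# Minimal sketch of the evaluation protocol reported in the paper:
# 10-fold cross validation scored with F1, AUC, and accuracy.
# Assumptions (not specified in the paper): features X and binary burst
# labels y are precomputed from the Weibo cascades; LibSVM is stood in
# for by scikit-learn's SVC with default hyperparameters.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_validate
from sklearn.svm import SVC


def evaluate_burst_classifier(X: np.ndarray, y: np.ndarray) -> dict:
    """Run 10-fold CV and return the mean F1, AUC, and accuracy."""
    clf = SVC(kernel="rbf")  # SVC is backed by LibSVM
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    scores = cross_validate(
        clf, X, y, cv=cv,
        scoring={"f1": "f1", "auc": "roc_auc", "accuracy": "accuracy"},
    )
    return {name: scores[f"test_{name}"].mean()
            for name in ("f1", "auc", "accuracy")}


if __name__ == "__main__":
    # Toy stand-in data; replace with cascade features from the Weibo dataset.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))
    y = rng.integers(0, 2, size=200)
    print(evaluate_burst_classifier(X, y))
```

Swapping the toy arrays for features extracted from the Weibo cascades would reproduce the evaluation loop described in the table, though not necessarily the paper's numbers, since the exact features and classifier settings are not reported.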