Anytime Information Cascade Popularity Prediction via Self-Exciting Processes

Authors: Xi Zhang, Akshay Aravamudan, Georgios C. Anagnostopoulos

Venue: ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We showcase CASPER's merits via experiments entailing both synthetic and real-world data, and demonstrate that it considerably improves upon prior works in terms of accuracy, especially for early-stage prediction.
Researcher Affiliation | Academia | Department of Computer Engineering & Sciences, Florida Institute of Technology, Melbourne, FL, USA. Correspondence to: Xi Zhang <zhang2012@my.fit.edu>.
Pseudocode | No | No explicitly labeled pseudocode or algorithm blocks were found. The paper describes algorithms and derivations but not in pseudocode format.
Open Source Code | Yes | Python 3.9.12 code for CASPER can be found at https://github.com/xizhang-cc/casper.
Open Datasets | Yes | Released by Zhao et al. (2015), SEISMIC is a widely adopted Twitter dataset for social media popularity prediction tasks (Mishra et al., 2016; Chen & Tan, 2018; Tan & Chen, 2021).
Dataset Splits | Yes | In the case of (tc = 0.5 hour, t = 23.5 hour), after filtering and splitting, we end up with 21463 training, 4599 validation, and 4599 test cascades. In the case of (tc = 1 hour, t = 23 hour), we end up with 29908 training, 6409 validation, and 6408 test cascades. (Both correspond to a roughly 70/15/15 split; see the sketch after this table.)
Hardware Specification | Yes | CASPER's training takes about 0.1207 seconds per (ti, tj) pair on a Windows 10 machine with an Intel Core i7-4720HQ CPU @ 2.60 GHz and 16.0 GB of RAM.
Software Dependencies | Yes | Python 3.9.12 code for CASPER can be found at https://github.com/xizhang-cc/casper.
Experiment Setup | No | The paper mentions using a "projected gradient descent algorithm, which is detailed in Appendix B.2" for optimization but does not provide specific hyperparameter values like learning rate, batch size, or number of epochs in the main text or appendix. (A generic projected-gradient sketch follows after this table.)
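
Note on the Dataset Splits row: the reported counts correspond to a roughly 70/15/15 train/validation/test split (e.g., 21463 + 4599 + 4599 = 30661, and 21463 / 30661 ≈ 0.70). The sketch below shows one way such a split could be produced; the `split_cascades` helper, the shuffling, and the seed are illustrative assumptions and not the authors' exact filtering and splitting procedure.

```python
import random

def split_cascades(cascades, train_frac=0.70, val_frac=0.15, seed=0):
    """Shuffle cascades and split them into train/validation/test sets.

    The 70/15/15 ratios mirror the counts reported in the table above;
    the shuffling and the seed are illustrative assumptions, not the
    paper's exact procedure.
    """
    cascades = list(cascades)
    random.Random(seed).shuffle(cascades)
    n = len(cascades)
    n_train = int(train_frac * n)
    n_val = int(val_frac * n)
    train = cascades[:n_train]
    val = cascades[n_train:n_train + n_val]
    test = cascades[n_train + n_val:]
    return train, val, test

# Example: a cascade here is just a sorted list of event times (hours).
toy_cascades = [[0.0, 0.2, 0.9], [0.0, 1.5], [0.0, 0.1, 0.4, 2.2]]
tr, va, te = split_cascades(toy_cascades)
print(len(tr), len(va), len(te))
```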
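
Note on the Experiment Setup row: the paper names a projected gradient descent routine (Appendix B.2) but the assessment found no hyperparameter values. The snippet below is only a generic projected-gradient step for readers unfamiliar with the method; the objective, the nonnegativity constraint, the step size, and the stopping rule are assumptions for illustration and do not reproduce the authors' Appendix B.2 procedure.

```python
import numpy as np

def projected_gradient_descent(grad, project, x0, step=1e-2, n_iters=1000, tol=1e-8):
    """Generic projected gradient descent.

    grad(x)    : gradient of the objective at x
    project(x) : Euclidean projection onto the feasible set
    The step size, iteration count, and tolerance are illustrative
    placeholders, not values taken from the paper.
    """
    x = project(np.asarray(x0, dtype=float))
    for _ in range(n_iters):
        x_new = project(x - step * grad(x))
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Toy usage: minimize ||x - c||^2 subject to x >= 0 (nonnegativity is an
# assumed constraint chosen only to make the example concrete).
c = np.array([1.0, -2.0, 0.5])
grad = lambda x: 2.0 * (x - c)
project = lambda x: np.maximum(x, 0.0)
x_star = projected_gradient_descent(grad, project, x0=np.zeros(3))
print(x_star)  # expected approximately [1.0, 0.0, 0.5]
```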