Cascade Dynamics Modeling with Attention-based Recurrent Neural Network

Authors: Yongqing Wang, Huawei Shen, Shenghua Liu, Jinhua Gao, Xueqi Cheng

IJCAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on both synthetic and real world datasets demonstrate the proposed models outperform state-of-the-art models at both cascade prediction and inferring diffusion tree. In experiments, we compare our CYAN-RNN to the state-of-the-art modeling methods of cascade prediction on both synthetic and real data.
Researcher Affiliation | Academia | (1) CAS Key Laboratory of Network Data Science and Technology, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China; (2) University of Chinese Academy of Sciences, Beijing, China
Pseudocode | No | The paper describes the model architecture and optimization process using mathematical formulas and prose, but does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any statement about releasing source code or a link to a code repository.
Open Datasets | No | The real data is from Sina Weibo, a Chinese microblog website. The data is from June 1st, 2016 to June 30th, 2016. ... The paper does not provide concrete access information for this dataset or the synthetic datasets.
Dataset Splits | Yes | For synthetic data: "we randomly pick up 80% of cascades for training and the rest for validation and test by an even split." For real data: "We use 536,240 sequences for training, 29,758 for validation and 30,005 for testing." (A minimal split sketch follows the table.)
Hardware Specification | No | The paper does not provide any specific hardware details such as GPU/CPU models, memory, or cloud computing instance types used for running the experiments.
Software Dependencies | No | The paper mentions using Adam for optimization and GRU units, but does not provide specific version numbers for any software dependencies or libraries.
Experiment Setup | Yes | The hyper-parameters of CYAN-RNN and CYAN-RNN(cov) are set as follows: learning rate is 0.0001; hidden layer size of encoder is 20; hidden layer size of decoder is 10; length of dependence is 200; coverage size is 10; and batch size is 128. We apply stochastic gradient descent (SGD) with mini-batch and the parameters are updated by Adam [Kingma and Ba, 2015]. (A hedged configuration sketch follows the table.)
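
The Dataset Splits row describes an 80% training split with the remainder divided evenly into validation and test. The sketch below is a minimal illustration of that procedure, assuming cascades are held as a Python list of event sequences; the function name `split_cascades` and the fixed random seed are assumptions, not part of the paper.

```python
import random

def split_cascades(cascades, train_frac=0.8, seed=0):
    """Randomly pick 80% of cascades for training and split the rest
    evenly into validation and test, as described in the paper."""
    rng = random.Random(seed)
    shuffled = list(cascades)
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    rest = shuffled[n_train:]
    n_valid = len(rest) // 2
    return shuffled[:n_train], rest[:n_valid], rest[n_valid:]

# Example: train, valid, test = split_cascades(all_cascades)
```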
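The Experiment Setup row lists concrete hyper-parameter values but the paper does not name a training framework. The sketch below simply collects those reported values in one place and wires up GRU layers with the stated hidden sizes and an Adam optimizer; the choice of PyTorch, the `input_dim` embedding size, and the bare GRU stand-ins (omitting the paper's attention and coverage mechanisms) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Hyper-parameters reported for CYAN-RNN / CYAN-RNN(cov).
CONFIG = {
    "learning_rate": 1e-4,      # learning rate
    "encoder_hidden": 20,       # hidden layer size of encoder
    "decoder_hidden": 10,       # hidden layer size of decoder
    "dependence_length": 200,   # length of dependence
    "coverage_size": 10,        # coverage size
    "batch_size": 128,          # mini-batch size for SGD
}

# Stand-in GRU encoder/decoder with the reported hidden sizes; the real model
# adds attention (and coverage) on top of these recurrent layers.
input_dim = 32  # assumed embedding size; not given in the paper
encoder = nn.GRU(input_size=input_dim,
                 hidden_size=CONFIG["encoder_hidden"], batch_first=True)
decoder = nn.GRU(input_size=CONFIG["encoder_hidden"],
                 hidden_size=CONFIG["decoder_hidden"], batch_first=True)

# Mini-batch SGD with parameters updated by Adam, as stated in the paper.
params = list(encoder.parameters()) + list(decoder.parameters())
optimizer = torch.optim.Adam(params, lr=CONFIG["learning_rate"])
```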