Multi-scale Information Diffusion Prediction with Reinforced Recurrent Networks

Authors: Cheng Yang, Jian Tang, Maosong Sun, Ganqu Cui, Zhiyuan Liu

IJCAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results show that our proposed model outperforms state-of-the-art baseline models on both microscopic and macroscopic diffusion predictions on three real-world datasets.
Researcher Affiliation | Collaboration | Department of Computer Science and Technology, Tsinghua University, Beijing, China; Mila-Quebec Institute for Learning Algorithms, Canada
Pseudocode | No | The paper describes algorithms and processes in prose and mathematical equations but does not include a formal pseudocode block or algorithm box.
Open Source Code | Yes | The source code of this paper can be found at https://github.com/albertyang33/FOREST.
Open Datasets | Yes | The Twitter dataset [Hodas and Lerman, 2014] records tweets containing URLs during October 2010. ... Douban [Zhong et al., 2012] is a Chinese social website ... Memetracker [Leskovec et al., 2009] collects a million news stories and blog posts from online websites and tracks the most frequent quotes and phrases, i.e. memes, to analyze the migration of memes among people. ... For the Twitter and Douban datasets, we use pretrained DeepWalk [Perozzi et al., 2014] embeddings with dimension d = 64 as initial user feature vectors f_v^(0) (see the loading sketch after the table).
Dataset Splits | Yes | We randomly sample 80% of cascades for training, 10% for validation and the remaining 10% for testing (a split sketch follows the table).
Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments, such as GPU/CPU models or memory specifications.
Software Dependencies | No | The paper mentions using the Adam optimizer [Kingma and Ba, 2015] but does not specify version numbers for any software dependencies.
Experiment Setup | Yes | For hyper-parameter settings, the dimension of the hidden state and user feature vector is d = 64, the controlling window size m = 3, the numbers of neighbors sampled in structural context extraction are Z1 = 25 (first-order) and Z2 = 10 (second-order), the positional embedding dimension d_pos = 8, and training data are grouped into mini-batches of size 16 (a configuration sketch follows the table).
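
As context for the Open Datasets row, here is a minimal sketch of loading pretrained DeepWalk embeddings as initial user feature vectors f_v^(0). The file format (word2vec-style text output, which DeepWalk produces by default) and the function name are assumptions; the released FOREST code may load features differently.

```python
import numpy as np

def load_deepwalk_features(path, num_users, dim=64):
    """Load pretrained DeepWalk embeddings (d = 64 in the paper) as
    initial user feature vectors f_v^(0).

    Assumes word2vec-style text output: a '<num_nodes> <dim>' header
    line followed by one 'user_id v1 ... v_dim' line per user.
    """
    # Users absent from the embedding file fall back to a small random
    # initialization (an assumption, not stated in the paper).
    features = np.random.normal(scale=0.1, size=(num_users, dim)).astype(np.float32)
    with open(path) as fh:
        next(fh)  # skip the '<num_nodes> <dim>' header line
        for line in fh:
            parts = line.rstrip().split()
            uid = int(parts[0])
            features[uid] = np.asarray(parts[1:], dtype=np.float32)
    return features
```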
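
The 80%/10%/10% cascade split can be reproduced in a few lines. The paper says only that cascades are sampled randomly, so the fixed seed and helper name below are illustrative:

```python
import random

def split_cascades(cascades, seed=0):
    """Randomly split cascades into 80% train / 10% validation / 10% test."""
    cascades = list(cascades)
    random.Random(seed).shuffle(cascades)  # seed choice is an assumption
    n_train = int(0.8 * len(cascades))
    n_valid = int(0.1 * len(cascades))
    return (cascades[:n_train],                      # training cascades
            cascades[n_train:n_train + n_valid],     # validation cascades
            cascades[n_train + n_valid:])            # test cascades
```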
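
Finally, the reported hyper-parameters collect naturally into a configuration block, paired with the Adam optimizer the paper mentions. The dictionary keys are illustrative, the GRU is only a stand-in for the paper's recurrent architecture, and the learning rate is an assumption since the quoted excerpt does not report one:

```python
import torch
import torch.nn as nn

# Hyper-parameters quoted from the paper's experiment setup.
CONFIG = {
    "hidden_dim": 64,    # d: hidden state / user feature dimension
    "window_size": 3,    # m: controlling window size
    "z1_neighbors": 25,  # Z1: first-order neighbors sampled
    "z2_neighbors": 10,  # Z2: second-order neighbors sampled
    "pos_embed_dim": 8,  # d_pos: positional embedding dimension
    "batch_size": 16,
}

# Stand-in recurrent module; the actual FOREST architecture differs.
model = nn.GRU(input_size=CONFIG["hidden_dim"], hidden_size=CONFIG["hidden_dim"])
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # lr is an assumption
```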