A Recurrent Neural Cascade-based Model for Continuous-Time Diffusion

Authors: Sylvain Lamprier

ICML 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We perform experiments on one artificial and three real-world datasets.
Researcher Affiliation | Academia | Sylvain Lamprier, Sorbonne Universités, LIP6, F-75005, Paris, France.
Pseudocode | No | Our full efficient algorithm is given in the supplementary material.
Open Source Code | Yes | The full code in Python is available at: https://github.com/lampriers/recCTIC
Open Datasets | Yes | Weibo: retweet cascades extracted from the Weibo microblogging website using the procedure described in (Leskovec et al., 2009); the dataset was collected by (Fu et al., 2013). Memetracker: the Memetracker dataset described in (Leskovec et al., 2009) contains millions of blog posts and news articles.
Dataset Splits | Yes | 10000 episodes for training, 5000 for validation, 5000 for testing. ... 20000 episodes for training, 5000 for validation, 5000 for testing. ... 45000 episodes for training, 5000 for validation, 5000 for testing. ... 250000 for training, 5000 for validation, 5000 for testing. (A split helper is sketched after this table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU or GPU models, memory) used for running its experiments.
Software Dependencies | No | The paper mentions 'the full code in python' but does not specify any software names with version numbers for libraries, frameworks, or other dependencies.
Experiment Setup | Yes | For every model with an embedding space (i.e., all except CTIC), we set its dimension to d = 50... The reported results for our model use a GRU module as the recurrent state transformation function fφ. The optimization is done using the ADAM optimizer (Kingma & Ba, 2014) over mini-batches of M episodes ordered by length to avoid padding (M = 512 and K = 1 in our experiments). (See the training-loop sketch below.)
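
For concreteness, here is a minimal sketch of how the episode splits quoted above could be applied, assuming the episodes arrive as a single list of cascades. The helper `split_episodes` and the sequential split order are assumptions for illustration; only the counts come from the paper, and the pairing of counts to specific datasets is not claimed here.

```python
# Hypothetical split helper; only the counts (e.g. 10000/5000/5000) are
# taken from the paper, the sequential ordering is an assumption.
def split_episodes(episodes, n_train, n_val=5000, n_test=5000):
    assert len(episodes) >= n_train + n_val + n_test
    train = episodes[:n_train]
    val = episodes[n_train:n_train + n_val]
    test = episodes[n_train + n_val:n_train + n_val + n_test]
    return train, val, test

# e.g., for the dataset using the 10000/5000/5000 split:
# train, val, test = split_episodes(all_episodes, n_train=10000)
```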
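
The experiment-setup row names concrete ingredients: d = 50, a GRU as the recurrent state transformation fφ, ADAM, and mini-batches of M = 512 episodes ordered by length to avoid padding. Below is a hedged PyTorch sketch of that training loop. The model body `RecurrentCascade`, the user count `N_USERS`, the toy data, and the next-user objective are illustrative stand-ins, not the paper's recCTIC continuous-time likelihood; only d, M, the GRU role, the ADAM choice, and the length-ordered batching follow the quoted setup.

```python
# Hedged sketch of the reported training setup; the model and loss are
# stand-ins, NOT the paper's recCTIC continuous-time diffusion likelihood.
import torch
import torch.nn as nn
from collections import defaultdict

D, M, N_USERS = 50, 512, 1000   # d and M follow the paper; N_USERS is hypothetical

class RecurrentCascade(nn.Module):
    """Illustrative recurrent cascade model: user embeddings feed a GRU cell
    (playing the role of f_phi) that maintains the diffusion state."""
    def __init__(self, n_users, d):
        super().__init__()
        self.embed = nn.Embedding(n_users, d)
        self.cell = nn.GRUCell(d, d)      # the recurrent state transformation
        self.out = nn.Linear(d, n_users)  # hypothetical next-infected read-out

    def forward(self, users):
        # users: (batch, seq_len) long tensor of infected-user ids
        h = torch.zeros(users.size(0), D)
        logits = []
        for t in range(users.size(1)):
            h = self.cell(self.embed(users[:, t]), h)
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)  # (batch, seq_len, n_users)

def length_ordered_batches(episodes, batch_size=M):
    """Group episodes of identical length so that mini-batches need no
    padding, mirroring the 'ordered by length' batching described above."""
    by_len = defaultdict(list)
    for ep in episodes:
        by_len[len(ep)].append(ep)
    for length in sorted(by_len):
        group = by_len[length]
        for i in range(0, len(group), batch_size):
            yield torch.tensor(group[i:i + batch_size])

model = RecurrentCascade(N_USERS, D)
opt = torch.optim.Adam(model.parameters())  # ADAM, as in the paper

# Toy episodes (hypothetical data): user ids in order of infection.
episodes = [torch.randint(0, N_USERS, (int(torch.randint(3, 12, ())),)).tolist()
            for _ in range(2000)]

for batch in length_ordered_batches(episodes):
    logits = model(batch[:, :-1])  # stand-in objective: predict the next user
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, N_USERS), batch[:, 1:].reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Grouping episodes by exact length, rather than padding within sorted chunks, is one way to realize the stated goal of avoiding padding entirely; in the paper's actual model the cross-entropy stand-in would be replaced by the continuous-time diffusion likelihood.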