Prediction-Centric Learning of Independent Cascade Dynamics from Partial Observations

Authors: Mateusz Wilinski, Andrey Lokhov

ICML 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we test the SLICER algorithm on synthetic data. We start by presenting a comparison with established methods, and then empirically evaluate the performance of our algorithm on a variety of topologies, including random graphs, as well as regular lattices and real-world networks with a very large number of loops.
Researcher Affiliation | Academia | Theoretical Division, Los Alamos National Laboratory, Los Alamos, USA.
Pseudocode | No | The complete form of the algorithm along with the full derivation is presented in the Supplementary Materials, section S1 (Wilinski & Lokhov, 2020), where we also discuss the selection of ε and its relation to the convergence of our learning procedure. The pseudocode is therefore not included in the main paper.
Open Source Code | Yes | A full implementation of our algorithms is available at (Wilinski & Lokhov, 2021), whereas the supplementary materials can be found in (Wilinski & Lokhov, 2020).
Open Datasets | Yes | We use two web networks for numerical tests: one representing the snapshot of the structure of the Internet at the level of autonomous systems (Rossi & Ahmed, 2015), and the other obtained by a web-crawler (Boldi et al., 2004).
Dataset Splits | No | The paper does not provide specific train/validation/test dataset splits with percentages or counts.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies | No | For the optimization of maximum likelihood, we use the optimization software Ipopt (Wächter & Biegler, 2006) within the Julia/JuMP modeling framework for mathematical optimization (Dunning et al., 2017). No specific version numbers for these software components are provided.
Experiment Setup | Yes | We initialise the learning process with αij = 0.5 for all (ij) ∈ E, unless stated otherwise. In all the tests below, unless stated otherwise: (i) parameters αij are sampled uniformly from [0, 1]; (ii) each cascade is generated independently from the IC model with limited T, varying from 4 to 20; (iii) the source of every cascade is a single, randomly chosen node; and (iv) hidden nodes are chosen uniformly at random.
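The cascade-generation protocol quoted above follows the standard Independent Cascade (IC) model, so it can be sketched directly. The snippet below is an illustrative reconstruction, not the authors' code: the function names (`ic_cascade`, `random_instance`) and the dict-based graph representation are assumptions, and only the stated setup (uniform αij on [0, 1], a single random source, a finite horizon T) is taken from the paper.

```python
import random

def ic_cascade(adj, alpha, T, source, rng=random):
    """Simulate one cascade of the Independent Cascade (IC) model up to
    horizon T.  Returns a dict mapping each activated node to its
    activation time; nodes absent from the dict never activated."""
    tau = {source: 0}        # the single source activates at time 0
    frontier = [source]      # nodes activated at the previous step
    for t in range(1, T):
        newly = []
        for i in frontier:
            # a newly activated node i gets exactly one chance to
            # activate each still-inactive neighbour j, succeeding
            # with probability alpha[(i, j)]
            for j in adj[i]:
                if j not in tau and rng.random() < alpha[(i, j)]:
                    tau[j] = t
                    newly.append(j)
        if not newly:        # cascade died out before the horizon
            break
        frontier = newly
    return tau

def random_instance(adj, T, rng=random):
    """One cascade under the paper's stated setup (hypothetical helper):
    alpha_ij ~ Uniform[0, 1] per edge, a single uniformly random source."""
    alpha = {(i, j): rng.random() for i in adj for j in adj[i]}
    source = rng.choice(list(adj))
    return ic_cascade(adj, alpha, T, source, rng)
```

Hiding nodes, as in item (iv), would then amount to deleting a uniformly random subset of nodes from each returned `tau` before passing the cascades to the learning procedure.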