Nonbacktracking Bounds on the Influence in Independent Cascade Models

Authors: Emmanuel Abbe, Sanjeev Kulkarni, Eun Jee Lee

NeurIPS 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "This paper develops upper and lower bounds on the influence measure in a network, more precisely, the expected number of nodes that a seed set can influence in the independent cascade model. In particular, our bounds exploit nonbacktracking walks, Fortuin–Kasteleyn–Ginibre type inequalities, and are computed by message passing algorithms. Nonbacktracking walks have recently allowed for headways in community detection, and this paper shows that their use can also impact the influence computation. Further, we provide parameterized versions of the bounds that control the trade-off between the efficiency and the accuracy. Finally, the tightness of the bounds is illustrated with simulations on various network models." and "In this section, we evaluate the NB-UB and NB-LB in independent cascade models on a variety of classical synthetic networks."
Researcher Affiliation | Academia | "Emmanuel Abbe (1,2), Sanjeev Kulkarni (2), Eun Jee Lee (1). (1) Program in Applied and Computational Mathematics, (2) Department of Electrical Engineering, Princeton University. {eabbe, kulkarni, ejlee}@princeton.edu"
Pseudocode | Yes | "Algorithm 1 Nonbacktracking Upper Bound (NB-UB)" and "Algorithm 2 Nonbacktracking Lower Bound (NB-LB)"
Open Source Code | No | The paper does not provide any explicit statement about open-sourcing the code for the described methodology, nor does it include a link to a code repository.
Open Datasets | Yes | "Network Generation. We consider 4 classical random graph models with the parameters shown as follows: Erdős–Rényi random graphs ER(n = 1000, p = 0.003), scale-free networks SF(n = 1000, α = 2.5), random regular graphs Reg(n = 1000, d = 3), and random tree graphs with power-law degree distributions T(n = 1000, α = 3). For each graph model, we generate 100 networks IC(G, pA, {s}) as follows."
Dataset Splits | No | The paper describes generating synthetic networks and evaluating bounds against Monte Carlo simulations. However, it does not specify traditional training, validation, or test dataset splits as would be typical for a machine learning model, since the methods presented are bound computation algorithms rather than models that are trained.
Hardware Specification | No | The paper does not provide specific details regarding the hardware used to run the experiments (e.g., GPU models, CPU types, or cloud instance specifications).
Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies, libraries, or programming languages used in the experiments.
Experiment Setup | Yes | "Network Generation. We consider 4 classical random graph models with the parameters shown as follows: Erdős–Rényi random graphs ER(n = 1000, p = 0.003), scale-free networks SF(n = 1000, α = 2.5), random regular graphs Reg(n = 1000, d = 3), and random tree graphs with power-law degree distributions T(n = 1000, α = 3). For each graph model, we generate 100 networks IC(G, pA, {s}) as follows. The graph G is the largest connected component of a graph drawn from the graph model, the seed node s is a randomly selected vertex, and A is the adjacency matrix of G. The corresponding IC model has the same transmission probability p for every edge. Evaluation of Bounds. For each network generated, we compute the following quantities for each p ∈ {0.1, 0.2, . . . , 0.9}. σ_mc: the estimation of the influence with 10^6 Monte Carlo simulations. ... The sample size of 10^6 is determined to overly match the computational complexity of NB-LB algorithm."
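The experiment setup above (generate a random graph, keep its largest connected component, pick a random seed vertex, and estimate the influence σ_mc by repeated independent cascade simulations) can be sketched in Python. This is not the authors' code: it assumes the `networkx` library, uses the ER(n = 1000, p = 0.003) case only, and runs far fewer than the paper's 10^6 simulations; the function name `ic_influence_mc` is illustrative.

```python
import random
import networkx as nx

def ic_influence_mc(G, p, seed, n_sims=1000, rng=None):
    """Monte Carlo estimate of the influence of a single seed node in the
    independent cascade model IC(G, pA, {seed}): every edge out of a newly
    activated node transmits independently with probability p, and the
    influence is the expected number of activated nodes."""
    rng = rng or random.Random(0)
    total = 0
    for _ in range(n_sims):
        active = {seed}
        frontier = [seed]
        while frontier:  # breadth-first cascade over live edges
            nxt = []
            for u in frontier:
                for v in G.neighbors(u):
                    if v not in active and rng.random() < p:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        total += len(active)
    return total / n_sims

# Network generation as described above (Erdős–Rényi case): keep the
# largest connected component and pick a uniformly random seed vertex.
rng = random.Random(42)
G = nx.erdos_renyi_graph(1000, 0.003, seed=42)
G = G.subgraph(max(nx.connected_components(G), key=len)).copy()
seed = rng.choice(list(G.nodes()))
sigma_mc = ic_influence_mc(G, p=0.3, seed=seed, n_sims=2000, rng=rng)
print(f"estimated influence at p=0.3: {sigma_mc:.1f}")
```

The same estimator, swept over p ∈ {0.1, ..., 0.9} and averaged over 100 generated networks per graph model, reproduces the shape of the paper's evaluation; the NB-UB and NB-LB message passing bounds themselves (Algorithms 1 and 2) are not reimplemented here.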