Dynamic influence maximization

Authors: Binghui Peng

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We initiate a systematic study of dynamic influence maximization (DIM). In the DIM problem, one maintains a seed set S of at most k nodes in a dynamically evolving social network, with the goal of maximizing the expected influence spread while minimizing the amortized update cost. We consider two evolution models. In the incremental model, the social network only grows over time: new users are introduced and new social links are established. We design an algorithm that achieves a (1 - 1/e - ϵ)-approximation to the optimal solution with k · poly(log n, ϵ^(-1)) amortized running time, matching the state-of-the-art offline algorithm with only poly-logarithmic overhead. In the fully dynamic model, users join and leave, and influence propagation is strengthened or weakened in real time. We prove that, under the Strong Exponential Time Hypothesis (SETH), no algorithm can achieve a 2^(-(log n)^(1-o(1)))-approximation unless the amortized running time is n^(1-o(1)). On the technical side, we exploit novel adaptive sampling approaches that reduce DIM to the dynamic MAX-k coverage problem, and we design an efficient (1 - 1/e - ϵ)-approximation algorithm for it. Our lower bound leverages the recently developed distributed PCP framework.
Researcher Affiliation | Academia | Binghui Peng, Columbia University, bp2601@columbia.edu
Pseudocode | Yes | Algorithm 1 INITIALIZE; Algorithm 2 ESTIMATE; Algorithm 3 BUILDGRAPH; Algorithm 4 INSERT-NODE(u); Algorithm 5 INSERT-EDGE(u, v); Algorithm 6 INITIALIZE; Algorithm 7 REVOKE(i); Algorithm 8 INSERT-EDGE-COV(u, v)
Open Source Code | No | The paper does not provide any concrete access to source code for the methodology described. There are no links or statements about code availability.
Open Datasets | No | The paper is theoretical and does not conduct empirical studies with datasets; hence, it does not use or provide information about publicly available training datasets.
Dataset Splits | No | The paper is theoretical and does not conduct empirical studies; therefore, it does not provide details about training, validation, or test dataset splits needed for reproduction of experiments.
Hardware Specification | No | The paper is theoretical and does not describe any experimental setup that would require specific hardware. No hardware specifications are mentioned.
Software Dependencies | No | The paper describes theoretical algorithms and proofs; it does not mention any specific software dependencies or versions required to replicate an experimental setup.
Experiment Setup | No | The paper is theoretical and focuses on algorithm design and proofs. It does not describe an experimental setup with hyperparameters or system-level training settings.
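
The abstract reduces DIM to the dynamic MAX-k coverage problem and invokes the familiar (1 - 1/e) guarantee. As a point of reference only, here is a minimal sketch of the classic *offline* greedy algorithm for MAX-k coverage, which attains that bound; the function name and input layout are illustrative, and this is not the paper's dynamic, sampling-based algorithm.

```python
def greedy_max_k_coverage(sets, k):
    """Classic greedy (1 - 1/e)-approximation for MAX-k coverage.

    sets: dict mapping a set id to a frozenset of covered elements.
    k:    maximum number of sets to pick.
    Returns (chosen_ids, covered_elements).
    """
    covered = set()
    chosen = []
    for _ in range(k):
        # Pick the set with the largest marginal coverage gain.
        best_id, best_gain = None, 0
        for sid, elems in sets.items():
            if sid in chosen:
                continue
            gain = len(elems - covered)
            if gain > best_gain:
                best_id, best_gain = sid, gain
        if best_id is None:  # no remaining set covers anything new
            break
        chosen.append(best_id)
        covered |= sets[best_id]
    return chosen, covered


# Illustrative usage with a toy instance:
sets = {"a": frozenset({1, 2, 3}), "b": frozenset({3, 4}), "c": frozenset({4, 5, 6})}
print(greedy_max_k_coverage(sets, 2))  # (['a', 'c'], {1, 2, 3, 4, 5, 6})
```

The paper's contribution is maintaining such a solution under node/edge insertions with only poly-logarithmic amortized overhead, which this static sketch does not attempt.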