Adaptive Influence Maximization with Myopic Feedback

Authors: Binghui Peng, Wei Chen

NeurIPS 2019

Reproducibility Variable Result LLM Response
Research Type Theoretical We study the adaptive influence maximization problem with myopic feedback under the independent cascade model: one sequentially selects k nodes as seeds one by one from a social network, each selected seed returns the immediate neighbors it activates as the feedback available for later selections, and the goal is to maximize the expected number of total activated nodes, referred to as the influence spread. We show that the adaptivity gap, the ratio between the optimal adaptive influence spread and the optimal non-adaptive influence spread, is at most 4 and at least e/(e − 1), and the approximation ratios with respect to the optimal adaptive influence spread of both the non-adaptive greedy and adaptive greedy algorithms are at least (1/4)(1 − 1/e) and at most (e² + 1)/(e + 1)² < 1 − 1/e. Moreover, the approximation ratio of the non-adaptive greedy algorithm is no worse than that of the adaptive greedy algorithm, when considering all graphs. Our result confirms a long-standing open conjecture of Golovin and Krause (2011) on the constant approximation ratio of adaptive greedy with myopic feedback, and it also suggests that adaptive greedy may not bring much benefit under myopic feedback.
Researcher Affiliation Collaboration Binghui Peng Columbia University bp2601@columbia.edu Wei Chen Microsoft Research weic@microsoft.com
Pseudocode Yes Figure 1 of the paper gives pseudocode descriptions of the greedy and adaptive greedy algorithms.
Open Source Code No The paper does not provide any links to open-source code or explicitly state that code for the described methodology is publicly available.
Open Datasets No The paper is theoretical, focusing on mathematical proofs and algorithm analysis rather than empirical evaluation, and therefore does not use any publicly available dataset.
Dataset Splits No The paper is theoretical and does not describe empirical experiments involving dataset splits for training, validation, or testing.
Hardware Specification No The paper is theoretical and does not describe empirical experiments, thus no hardware specifications are mentioned.
Software Dependencies No The paper is theoretical and does not describe empirical experiments, thus no specific software dependencies with version numbers are mentioned.
Experiment Setup No The paper is theoretical and focuses on mathematical proofs and algorithm analysis. It does not describe empirical experiments with specific hyperparameters or system-level training settings.
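To make the setting in the abstract concrete, the following is a minimal sketch of adaptive greedy with myopic feedback under the independent cascade model. The toy graph, edge probabilities, and Monte Carlo scoring are illustrative assumptions, not from the paper; the paper's Figure 1 is the authoritative pseudocode. "Myopic feedback" here means that after a seed is chosen, only the one-hop neighbors it immediately activates are observed before the next selection.

```python
import random

# Toy directed graph: node -> list of (neighbor, activation probability).
# Both the topology and the probabilities are made up for illustration.
GRAPH = {
    0: [(1, 0.5), (2, 0.5)],
    1: [(3, 0.5)],
    2: [(3, 0.5), (4, 0.5)],
    3: [],
    4: [],
}

def cascade(seeds, rng):
    """One independent-cascade simulation: each newly activated node gets a
    single chance to activate each inactive out-neighbor."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v, p in GRAPH.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active

def est_spread(seeds, rng, runs=2000):
    """Monte Carlo estimate of the expected influence spread of a seed set."""
    return sum(len(cascade(seeds, rng)) for _ in range(runs)) / runs

def adaptive_greedy(k, rng):
    """Adaptive greedy with myopic feedback: score each candidate seed given
    the nodes already observed active, pick the best, then observe only the
    one-hop activations of the new seed before the next round."""
    activated, seeds = set(), []
    for _ in range(k):
        best = max(
            (u for u in GRAPH if u not in activated),
            key=lambda u: est_spread(set(seeds) | activated | {u}, rng),
        )
        seeds.append(best)
        activated.add(best)
        # Myopic feedback: reveal only the seed's immediate activations.
        for v, p in GRAPH.get(best, []):
            if v not in activated and rng.random() < p:
                activated.add(v)
    return seeds, activated

rng = random.Random(0)
seeds, activated = adaptive_greedy(2, rng)
print(seeds, sorted(activated))
```

The non-adaptive greedy variant analyzed in the paper is the same loop without the feedback step: all k seeds are scored and committed before any cascade is observed.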