Adaptive Greedy versus Non-Adaptive Greedy for Influence Maximization

Authors: Wei Chen, Binghui Peng, Grant Schoenebeck, Biaoshuai Tao

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | Our first result shows that, for submodular influence maximization, the adaptive greedy algorithm can perform up to a (1 - 1/e)-fraction worse than the non-adaptive greedy algorithm, and that this ratio is tight. More specifically, on one side we provide examples where the performance of the adaptive greedy algorithm is only a (1 - 1/e) fraction of the performance of the non-adaptive greedy algorithm in four settings... On the other side, we prove that in any submodular cascade, the adaptive greedy algorithm always outputs a (1 - 1/e)-approximation to the expected number of adoptions in the optimal non-adaptive seed choice. Our second result shows that, for the general submodular cascade model with full-adoption feedback, the adaptive greedy algorithm can outperform the non-adaptive greedy algorithm by an unbounded factor. Theorem 3.1. For the full-adoption feedback model... The same result holds for the myopic feedback model. (A sketch of the non-adaptive greedy baseline follows the table.)
Researcher Affiliation | Collaboration | Wei Chen (Microsoft Research, China), Binghui Peng (Columbia University), Grant Schoenebeck (University of Michigan, Ann Arbor), Biaoshuai Tao (University of Michigan, Ann Arbor)
Pseudocode | No | The paper describes its algorithms in paragraph form (e.g., in Definition 2.9 for the non-adaptive greedy algorithm and its adaptive policy πg) but does not provide them in structured pseudocode blocks or clearly labeled algorithm figures. (An illustrative adaptive-greedy sketch also follows the table.)
Open Source Code | No | The paper does not contain any statement or link indicating the release of source code for the described methodology.
Open Datasets | No | The paper is theoretical and does not involve empirical experiments with datasets, and therefore does not mention training datasets or their public availability.
Dataset Splits | No | The paper is theoretical and does not conduct empirical experiments with datasets, and therefore does not mention training, validation, or test splits.
Hardware Specification | No | The paper is theoretical and does not report empirical experiments, and therefore does not specify any hardware used.
Software Dependencies | No | The paper is theoretical and does not report empirical experiments, and therefore does not list specific software dependencies with version numbers.
Experiment Setup | No | The paper is theoretical and does not describe an empirical experimental setup, and therefore does not provide details such as hyperparameters or training settings.
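For context on the comparison quoted in the Research Type row: below is a minimal sketch of the standard non-adaptive greedy for influence maximization under the independent cascade model, i.e., Monte-Carlo hill-climbing in the style of Kempe, Kleinberg, and Tardos. It is not code from the paper; the graph representation, function names, and simulation counts are illustrative assumptions.

import random

def simulate_ic(graph, seeds, rng):
    # One independent-cascade run; returns the set of activated nodes.
    # graph maps node -> list of (neighbor, activation_probability) pairs.
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        node = frontier.pop()
        for nbr, p in graph.get(node, []):
            if nbr not in active and rng.random() < p:
                active.add(nbr)
                frontier.append(nbr)
    return active

def expected_spread(graph, seeds, rng, runs=1000):
    # Monte-Carlo estimate of the expected number of adoptions.
    return sum(len(simulate_ic(graph, seeds, rng)) for _ in range(runs)) / runs

def nonadaptive_greedy(graph, k, runs=1000, seed=0):
    # Commit to all k seeds up front, each chosen to maximize estimated
    # marginal spread; no feedback is observed between choices.
    rng = random.Random(seed)
    seeds = set()
    for _ in range(k):
        best = max((v for v in graph if v not in seeds),
                   key=lambda v: expected_spread(graph, seeds | {v}, rng, runs))
        seeds.add(best)
    return seeds

For example, nonadaptive_greedy({0: [(1, 0.5)], 1: [(2, 0.5)], 2: []}, k=1) picks the seed with the largest estimated spread (node 0 on this path graph, with expected spread 1.75).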
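Similarly, for the prose-only adaptive policy πg noted in the Pseudocode row: the following is a hedged sketch of adaptive greedy under full-adoption feedback, using the live-edge view of the independent cascade model and reusing simulate_ic from the sketch above. The helpers sample_live_edges and reachable, and the shortcut of estimating marginal gains with fresh edge randomness rather than conditioning exactly on the revealed partial realization, are my own simplifications, not the paper's definitions.

def sample_live_edges(graph, rng):
    # Flip each edge once: the live-edge (possible-world) view of the IC model.
    return {u: [v for v, p in nbrs if rng.random() < p]
            for u, nbrs in graph.items()}

def reachable(live, seeds):
    # Nodes reachable from the seeds along live edges.
    seen, stack = set(seeds), list(seeds)
    while stack:
        u = stack.pop()
        for v in live.get(u, []):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def adaptive_greedy(graph, k, runs=200, seed=0):
    # Adaptive greedy under full-adoption feedback, simulated against one
    # hidden live-edge realization: after each pick, the seed's entire
    # cascade in that realization is revealed before the next pick.
    rng = random.Random(seed)
    world = sample_live_edges(graph, rng)  # the hidden true realization
    activated = set()
    for _ in range(k):
        def marginal(v):
            # Simplification: estimate the gain with fresh edge randomness
            # instead of conditioning exactly on the revealed realization.
            total = sum(len(simulate_ic(graph, activated | {v}, rng) - activated)
                        for _ in range(runs))
            return total / runs
        best = max((v for v in graph if v not in activated), key=marginal)
        activated |= reachable(world, activated | {best})  # full-adoption feedback
    return activated

The design point the paper studies is visible here: each adaptive seed choice can react to the cascade actually realized so far, whereas nonadaptive_greedy commits to all k seeds before any feedback is observed.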