Model-Independent Online Learning for Influence Maximization
Authors: Sharan Vaswani, Branislav Kveton, Zheng Wen, Mohammad Ghavamzadeh, Laks V. S. Lakshmanan, Mark Schmidt
ICML 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental evaluation suggests that our framework is robust to the underlying diffusion model and can efficiently learn a near-optimal solution. (Section 8, Experiments) |
| Researcher Affiliation | Collaboration | ¹University of British Columbia, ²Adobe Research, ³DeepMind (the work was done while the author was with Adobe Research) |
| Pseudocode | Yes | Algorithm 1: Diffusion-Independent LinUCB (DILinUCB). A hedged sketch of its selection and update steps appears below the table. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described, nor does it explicitly state that the code is publicly available. |
| Open Datasets | Yes | We choose the social network topology G as a subgraph of the Facebook network available at (Leskovec & Krevl, 2014) |
| Dataset Splits | Yes | all hyper-parameters for our algorithm are set using an initial validation set of 500 rounds. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details, such as library or solver names with version numbers, needed to replicate the experiment. |
| Experiment Setup | Yes | All hyper-parameters for our algorithm are set using an initial validation set of 500 rounds (a sketch of this validation protocol appears after the table). The best validation performance was observed for λ = 10⁴ and σ = 1. We compare DILinUCB against the CUCB algorithm (Chen et al., 2016) in both the IC model and the LT model, with K = 10. In Figure 3(a), we quantify the effect of varying d when the underlying diffusion model is IC and make the following observations: (i) the cumulative regret for both d = 10 and d = 100 is higher than that for d = 50. In Figures 3(b) and 3(c), we show the effect of varying K on the per-step reward. We compare CUCB and the independent version of our algorithm when the underlying model is IC and LT. We make the following observations: (i) for both IC and LT, the per-step reward for all methods increases with K; (ii) for the IC model, the per-step reward for our algorithm is higher than CUCB when K ∈ {5, 10, 20}, but the difference in the two spreads decreases with K. For K = 50, CUCB outperforms our algorithm. |
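
For readers who want the shape of the method, here is a minimal Python sketch of one DILinUCB-style round, assuming the paper's pairwise-reachability surrogate (the probability that seeding source u influences target w is modeled as x_wᵀθ*_u, with one independent estimator per source). The function names (`ucb_matrix`, `select_seeds`, `update_source`), the exploration coefficient `c`, and the exact update form are illustrative assumptions, not the authors' code; Algorithm 1 in the paper is the authoritative procedure.

```python
import numpy as np

def ucb_matrix(features, thetas, grams, c):
    """Optimistic estimates of each pairwise reachability p(u, w).

    features: (W, d) target-node features x_w
    thetas:   list of per-source ridge estimates theta_u, each (d,)
    grams:    list of per-source Gram matrices M_u, each (d, d)
    """
    mean = np.stack([features @ th for th in thetas])  # (S, W)
    # Exploration bonus c * ||x_w||_{M_u^{-1}} for every (source, target) pair
    bonus = np.stack([
        np.sqrt(np.einsum('wd,de,we->w', features, np.linalg.inv(g), features))
        for g in grams
    ])
    return np.clip(mean + c * bonus, 0.0, 1.0)

def select_seeds(ucb, K):
    """Greedy maximum-coverage oracle over the optimistic reachabilities."""
    n_targets = ucb.shape[1]
    not_covered = np.ones(n_targets)  # prob. each target is still uninfluenced
    seeds = []
    for _ in range(K):
        gains = ucb @ not_covered     # marginal expected coverage per source
        gains[seeds] = -np.inf        # never pick the same seed twice
        u = int(np.argmax(gains))
        seeds.append(u)
        not_covered *= 1.0 - ucb[u]
    return seeds

def update_source(gram, b, x, y, sigma=1.0):
    """Ridge-regression update for one played source.

    x: (m, d) features of the targets observed this round
    y: (m,)   0/1 indicators of whether each target was influenced
    """
    gram += (x.T @ x) / sigma**2
    b += (x.T @ y) / sigma**2
    theta = np.linalg.solve(gram, b)  # gram starts at lam * I, so it stays PD
    return theta, gram, b
```

Because each source keeps its own estimator and learning uses only the observed (seed, influenced-node) pairs, nothing in this loop depends on whether the true diffusion follows IC, LT, or something else, which is the sense in which the approach is model-independent.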
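
The 500-round validation protocol quoted above can be mirrored by a plain grid search. This is a hypothetical sketch: the candidate grids and the `run_bandit` callback are assumptions, since the paper reports only the selected values of λ and σ, not the grid it searched over.

```python
import itertools
import numpy as np

# Assumed candidate grids; the paper reports only the winning values.
LAMBDAS = [1e-4, 1e-2, 1e0, 1e2, 1e4]
SIGMAS = [0.1, 1.0, 10.0]

def validate(run_bandit, n_rounds=500):
    """Pick (lambda, sigma) by cumulative reward on the validation rounds.

    run_bandit(lam, sigma, n_rounds) -> float is a user-supplied callback
    that runs the bandit for n_rounds and returns its cumulative reward.
    """
    best_reward, best_params = -np.inf, None
    for lam, sigma in itertools.product(LAMBDAS, SIGMAS):
        reward = run_bandit(lam, sigma, n_rounds)
        if reward > best_reward:
            best_reward, best_params = reward, (lam, sigma)
    return best_params
```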