Adaptive Budget Allocation for Maximizing Influence of Advertisements
Authors: Daisuke Hatano, Takuro Fukunaga, Ken-ichi Kawarabayashi
IJCAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Section 5 compares performance of the algorithms through computational experiments. We implemented three adaptive policies: Policies 1 and 2, and a sensitive greedy policy defined as follows. ... In addition to the adaptive policies, we implemented a nonadaptive greedy (1 − 1/e)-approximation algorithm [Soma et al., 2014]. We run the algorithms for instances of the bipartite influence model. |
| Researcher Affiliation | Academia | Daisuke Hatano, Takuro Fukunaga, Ken-ichi Kawarabayashi National Institute of Informatics, Japan JST, ERATO, Kawarabayashi Large Graph Project, Japan {hatano, takuro, k_keniti}@nii.ac.jp |
| Pseudocode | Yes | Policy 1 Bicriteria (1 − 1/e)-Approximation Policy ... Policy 2 (e − 1)/(2e)-Approximation Policy |
| Open Source Code | No | The paper does not contain any explicit statement that the authors' source code for the described methodology is publicly available, nor does it provide a link to a code repository. |
| Open Datasets | Yes | For the experiments, we prepared a graph that represents user-user following information in Twitter [KONECT, 2014]. ... http://konect.uni-koblenz.de/networks/ego-twitter. |
| Dataset Splits | No | The paper states "We compute budget allocations over 500 instances by the policies, and compare their objective values by favg(π) for a policy π." and "the objective values are averaged over 500 instances for each k.", but it does not specify any training, validation, or test dataset splits or cross-validation setup for these instances. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper states "We implemented three adaptive policies" but does not provide specific version numbers for any software dependencies, libraries, or frameworks used in the implementation or experiments. |
| Experiment Setup | Yes | The parameters in the instances are set as follows: b(v) = 15 for all chosen nodes v, and the objective of the problem is defined as the maximization of the number of nodes influenced at least once. Budget k is set to a value in {20, 40, . . . , 200}. ... In the normal distribution, qvu(i) is given by exp(−(i − 15)²/50)/√(50π) for each i ∈ {1, . . . , 30} and vu ∈ E; in the power law distribution, qvu(i) is given by exp(0.2(i − 30))/10 for each i ∈ {1, . . . , 30} and vu ∈ E. |
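The two edge-probability distributions quoted in the experiment setup can be sketched as below. This is a minimal reconstruction, not the authors' code: the minus signs in the exponents and the √(50π) normalizer of the normal distribution are assumptions recovered from the garbled extraction (symbols such as − and π appear to have been dropped).

```python
import math

def q_normal(i: int) -> float:
    # Discretized normal distribution (mean 15) from the paper's setup:
    #   q_vu(i) = exp(-(i - 15)^2 / 50) / sqrt(50 * pi),  i in {1, ..., 30}
    # The sqrt(50 * pi) normalizer is a reconstruction; with it, the
    # probabilities over the support sum to roughly 1.
    return math.exp(-((i - 15) ** 2) / 50) / math.sqrt(50 * math.pi)

def q_power_law(i: int) -> float:
    # "Power law" distribution from the paper's setup:
    #   q_vu(i) = exp(0.2 * (i - 30)) / 10,  i in {1, ..., 30}
    # Mass increases with i, peaking at i = 30 with q(30) = 0.1.
    return math.exp(0.2 * (i - 30)) / 10

support = range(1, 31)
print(sum(q_normal(i) for i in support))     # approximately 1
print(sum(q_power_law(i) for i in support))  # well below 1 (unnormalized)
```

A sanity check like this makes the "No" entries above concrete: without released code or software versions, such details must be re-derived from the formulas alone.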