Robust Subset Selection by Greedy and Evolutionary Pareto Optimization

Authors: Chao Bian, Yawen Zhou, Chao Qian

IJCAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this section, we empirically compare the performance of the greedy algorithm, EPORSS and two previous algorithms, modified greedy [Hou and Clark, 2021] and SATURATE [Krause et al., 2008a], on the application of robust influence maximization. ... The experiments are performed on two real-world data sets, ego-Facebook and as-733, downloaded from https://snap.stanford.edu/data/index.html."
Researcher Affiliation | Academia | "Chao Bian, Yawen Zhou and Chao Qian. State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China. bianc@lamda.nju.edu.cn, zyw8769@gmail.com, qianc@lamda.nju.edu.cn"
Pseudocode | Yes |
  Algorithm 1: Greedy Algorithm
  Input: all items V = {v_1, v_2, ..., v_n}, the objective function F = min_{1 <= i <= m} f_i, and a budget k
  Output: a subset of V with k items
  Process:
  1: Let j = 0 and X_j = ∅
  2: while j < k do
  3:   Let v* = arg max_{v ∈ V \ X_j} F(X_j ∪ {v})
  4:   Let X_{j+1} = X_j ∪ {v*}, and j = j + 1
  5: end while
  6: return X_k
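As a reading aid, the quoted greedy rule can be sketched in Python. The paper releases no code, so the function name and the representation of the m objectives as plain callables returning floats are illustrative assumptions, not the authors' implementation:

```python
def greedy_robust_subset(items, objectives, k):
    """Greedily maximize the robust objective F(X) = min_i f_i(X)
    under a cardinality budget k.

    items: iterable of candidate items (the ground set V)
    objectives: list of m callables f_i, each mapping a set to a float
    k: budget, the number of items to select
    """
    def F(X):
        # Robust objective: worst case over the m objectives.
        return min(f(X) for f in objectives)

    X = set()
    for _ in range(k):
        # Line 3 of Algorithm 1: pick the item whose addition
        # maximizes the robust objective.
        best = max((v for v in items if v not in X),
                   key=lambda v: F(X | {v}))
        X.add(best)
    return X
```

For example, with two coverage objectives f_1(X) = |X ∩ {1, 2}| and f_2(X) = |X ∩ {2, 3}| and budget k = 1, the sketch selects {2}, the only singleton with a nonzero worst-case value.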
Open Source Code | No | "The supplementary material is available at https://arxiv.org/abs/2205.01415."
Open Datasets | Yes | "The experiments are performed on two real-world data sets, ego-Facebook and as-733, downloaded from https://snap.stanford.edu/data/index.html."
Dataset Splits | No | The paper uses the ego-Facebook and as-733 data sets but does not describe how they were split into training, validation, or test sets, nor does it state whether predefined splits were used.
Hardware Specification | No | The paper does not report the hardware used for its experiments, such as GPU/CPU models, processor types, or memory amounts.
Software Dependencies | No | The paper does not list the ancillary software, such as library or solver names with version numbers, that would be needed to replicate the experiments.
Experiment Setup | Yes | "The number of iterations of EPORSS is set to 2ek^2 n as suggested by Theorem 2. ... Specifically, for each network in as-733, we set the probability p_v(u, S) to min{0.1 + 0.05 * |S|, 1}, i.e., the probability of activating v is 0.1 for the first try, and then the probability increases by 0.05 once a try fails. ... To estimate the influence spread σ(X) of a subset X of nodes, we simulate the diffusion process 100 times independently and use the average as an estimation."
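The diffusion process and Monte Carlo estimator quoted above can be sketched as follows. This is not the authors' code: the adjacency-dict graph representation and the per-node counter of failed activation attempts are assumptions for illustration, reading the quote's "i.e." clause as saying the success probability starts at 0.1 and grows by 0.05 with each failed try on a node:

```python
import random

def simulate_spread(graph, seeds, rng):
    """One run of the diffusion described in the quote.

    graph: dict mapping node -> list of neighbours
    seeds: initial active node set X
    Each newly active node u tries once to activate each inactive
    neighbour v; the success probability grows with the number of
    failed attempts already made on v:
        p = min(0.1 + 0.05 * fails[v], 1.0)
    Returns the number of active nodes when the process stops.
    """
    active = set(seeds)
    frontier = list(seeds)
    fails = {}  # v -> number of failed activation attempts so far
    while frontier:
        next_frontier = []
        for u in frontier:
            for v in graph.get(u, []):
                if v in active:
                    continue
                p = min(0.1 + 0.05 * fails.get(v, 0), 1.0)
                if rng.random() < p:
                    active.add(v)
                    next_frontier.append(v)
                else:
                    fails[v] = fails.get(v, 0) + 1
        frontier = next_frontier
    return len(active)

def estimate_influence(graph, seeds, runs=100, seed=0):
    """Estimate σ(X) by averaging independent simulations
    (100 runs in the paper's setup)."""
    rng = random.Random(seed)
    return sum(simulate_spread(graph, seeds, rng) for _ in range(runs)) / runs
```

On an edgeless graph the estimate is exactly the seed-set size, and on connected graphs it lies between the seed-set size and the number of reachable nodes, which gives a quick sanity check for the estimator.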