Non-Monotone Adaptive Submodular Maximization
Authors: Alkis Gotovos, Amin Karbasi, Andreas Krause
IJCAI 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We have evaluated our proposed algorithm on the two objective functions described in the previous section, namely influence maximization and maximum cut, on a few real-world data sets. |
| Researcher Affiliation | Academia | Alkis Gotovos (ETH Zurich), Amin Karbasi (Yale University), Andreas Krause (ETH Zurich) |
| Pseudocode | Yes | Algorithm 1 Adaptive random greedy |
| Open Source Code | No | No explicit statement or link regarding the public release of source code was found. |
| Open Datasets | Yes | For our experiments, we used networks from the KONECT database, which accumulates network data sets from various other sources. [...] [McAuley and Leskovec, 2012]. |
| Dataset Splits | No | The paper does not provide explicit details about train/validation/test dataset splits for model training in a traditional sense. It mentions subsampling networks and evaluating on 'random realizations' and 'random ground sets' but no percentages or counts for distinct training, validation, and testing partitions. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for running experiments were mentioned. |
| Software Dependencies | No | No specific software dependencies with version numbers were listed (e.g., Python 3.x, TensorFlow x.x). |
| Experiment Setup | Yes | For the influence maximization objective, the influence propagation probability of each edge is chosen to be p = 0.1, and for the maximum cut objective, selecting a node cuts that node or one of its neighbors with equal probability. [...] we subsample each network down to 2000 nodes, [...] select uniformly at random a subset of 100 nodes as the ground set E, and repeat the experiments for 50 such random ground sets. |
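The setup quoted above (random greedy selection under an independent-cascade influence model with edge probability p = 0.1, evaluated on a 100-node ground set) can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: the function names (`simulate_cascade`, `random_greedy`), the Monte Carlo sample count, and the adjacency-dict graph representation are all assumptions for this example, and the adaptive aspect of the paper's Algorithm 1 (conditioning on observed realizations) is omitted for brevity.

```python
import random

def simulate_cascade(graph, seeds, p=0.1, rng=None):
    """Independent-cascade spread: each out-edge of an active node
    activates its endpoint with probability p (assumed model)."""
    rng = rng or random.Random(0)
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        node = frontier.pop()
        for nbr in graph.get(node, ()):
            if nbr not in active and rng.random() < p:
                active.add(nbr)
                frontier.append(nbr)
    return len(active)

def random_greedy(graph, ground_set, k, p=0.1, samples=20, rng=None):
    """Non-adaptive random greedy sketch: at each step, estimate marginal
    gains by Monte Carlo and pick uniformly among the top-k candidates."""
    rng = rng or random.Random(0)
    chosen = []
    base = 0.0
    for _ in range(k):
        gains = []
        for v in ground_set:
            if v in chosen:
                continue
            est = sum(simulate_cascade(graph, chosen + [v], p, rng)
                      for _ in range(samples)) / samples
            gains.append((est - base, v))
        gains.sort(reverse=True)
        top = gains[:k]  # random greedy: sample uniformly from the best k
        gain, v = top[rng.randrange(len(top))]
        chosen.append(v)
        base += gain
    return chosen

# Hypothetical toy instance mirroring the reported protocol in miniature:
# a small graph and a ground set from which k nodes are selected.
toy_graph = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
selection = random_greedy(toy_graph, ground_set=[0, 1, 2, 3], k=2)
```

The uniform choice among the top-k candidates (rather than always taking the argmax) is what distinguishes random greedy from plain greedy and is the ingredient that yields guarantees for non-monotone objectives.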