Nearly Linear-Time, Parallelizable Algorithms for Non-Monotone Submodular Maximization
Authors: Alan Kuhnle (pp. 8200-8208)
AAAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we evaluate our algorithm in comparison with the state-of-the-art parallelizable algorithms... Our results are summarized as follows. Our algorithm ATG obtains the best objective value of any of the parallelizable algorithms... Empirically, we demonstrate that heuristic versions of both of our algorithms achieve superior objective value to current state-of-the-art algorithms while using a small number of queries and adaptive rounds on two applications of SMCC. |
| Researcher Affiliation | Academia | Alan Kuhnle Department of Computer Science, Florida State University, Tallahassee, Florida kuhnle@cs.fsu.edu |
| Pseudocode | Yes | Algorithm 1 The ADAPTIVESIMPLETHRESHOLD Algorithm and Algorithm 2 The ADAPTIVETHRESHOLDGREEDY Algorithm |
| Open Source Code | No | The full version is available at https://arxiv.org/abs/2009.01947. (This link is to the paper itself, not source code.) No other explicit statement about open-sourcing the code is provided. |
| Open Datasets | Yes | We evaluate on a variety of network topologies from the Stanford Large Network Dataset Collection (Leskovec and Krevl 2020). |
| Dataset Splits | No | The paper mentions evaluating on specific network datasets (e.g., web-Google) but does not provide explicit details about train/validation/test splits, percentages, or sample counts in the provided text. |
| Hardware Specification | No | The paper does not specify any particular hardware (e.g., GPU, CPU models, or memory) used for running the experiments. |
| Software Dependencies | No | The paper describes and compares several algorithms but does not name the software libraries, programming languages, or version numbers used in the implementation or experiments. |
| Experiment Setup | Yes | For all algorithms, accuracy parameter ε was set to 0.1; 100 samples were used to evaluate expectations in all adaptive algorithms (thus, these algorithms were run as heuristics with no performance guarantee). Randomized algorithms are averaged over 20 independent repetitions, and the mean is reported. The standard deviation is indicated by a shaded region in the plots. Any algorithm that requires a subroutine for UNCONSTRAINEDMAX is implemented to use a random set, which is a (1/4)-approximation by Feige, Mirrokni, and Vondrák (2011). |
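The two reproducibility-relevant choices in the setup row can be sketched in code: the UNCONSTRAINEDMAX subroutine replaced by a uniformly random subset (the (1/4)-approximation of Feige, Mirrokni, and Vondrák 2011), and the protocol of averaging a randomized algorithm over 20 independent repetitions. This is a minimal illustrative sketch, not the paper's implementation; the function names and the repetition harness are hypothetical.

```python
import random
import statistics

def random_set_unconstrained_max(ground_set, rng=None):
    """Return a uniformly random subset of the ground set.

    Feige, Mirrokni, and Vondrák (2011) show that a uniformly random
    subset is, in expectation, a (1/4)-approximation for unconstrained
    non-monotone submodular maximization, which is how the paper
    instantiates any required UNCONSTRAINEDMAX subroutine.
    """
    rng = rng or random.Random()
    return {v for v in ground_set if rng.random() < 0.5}

def mean_over_repetitions(run_once, reps=20, seed=0):
    """Mirror the evaluation protocol: run a randomized algorithm
    `reps` times independently and report mean and standard deviation
    of the objective (the paper plots the mean with a shaded std band).
    `run_once` is a hypothetical callable taking an RNG and returning
    the objective value of one run.
    """
    master = random.Random(seed)
    values = [run_once(random.Random(master.random())) for _ in range(reps)]
    return statistics.mean(values), statistics.pstdev(values)
```

A caller would pass its randomized algorithm (e.g., a heuristic version of ATG evaluated with 100 samples per expectation) as `run_once` and report the returned mean, matching the 20-repetition protocol described above.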