Fully Dynamic Algorithm for Constrained Submodular Optimization

Authors: Silvio Lattanzi, Slobodan Mitrović, Ashkan Norouzi-Fard, Jakub M. Tarnawski, Morteza Zadimoghaddam

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section we empirically evaluate our algorithm. We perform experiments using a slightly simplified variant of our algorithm; see Appendix E for more details. We focus on the number of oracle calls performed during the computation and on the quality of the returned solutions. More specifically, we perform a sequence of insertions and removals of elements, and after each operation i we output a high-value set $S_i$ of cardinality at most k. For a given sequence of n operations, we plot: (i) the total number of oracle calls our algorithm performs for each of the n operations; (ii) the quality of the average output set, i.e., $\sum_{i=1}^{n} f(S_i)/n$. (A minimal sketch of this measurement loop appears after the table.)
Researcher Affiliation | Collaboration | Silvio Lattanzi (Google Research, silviol@google.com); Slobodan Mitrović (MIT, slobo@mit.edu); Ashkan Norouzi-Fard (Google Research, ashkannorouzi@google.com); Jakub Tarnawski (Microsoft Research, jatarnaw@microsoft.com); Morteza Zadimoghaddam (Google Research, zadim@google.com)
Pseudocode | Yes | Algorithm 1 INITIALIZATION; Algorithm 2 BUCKET-CONSTRUCT(i, ℓ); Algorithm 3 INSERTION(e); Algorithm 4 DELETION(e); Algorithm 5 LEVEL-CONSTRUCT(ℓ)
Open Source Code | Yes | The code of our implementations can be found at https://github.com/google-research/google-research/tree/master/fully_dynamic_submodular_maximization.
Open Datasets | Yes | We perform evaluations on the Enron (|V| = 36,692, |E| = 183,831), ego-Twitter (|V| = 81,306, |E| = 1,768,149), and Pokec (|V| = 1,632,803, |E| = 30,622,564) graph datasets from the SNAP Large Networks Data Collection [LK15].
Dataset Splits | No | The paper describes two types of experiments, a sliding window over graph nodes and insertion/deletion of nodes in specific orders, but it does not provide specific train/validation/test dataset splits for model training or evaluation. (A sketch of a sliding-window operation sequence appears after the table.)
Hardware Specification | No | All experiments in this paper are run on commodity hardware.
Software Dependencies | No | The paper does not provide specific software names with version numbers for its dependencies.
Experiment Setup | No | The paper describes the objective function used (dominating sets), general data-processing strategies (sliding windows, specific deletion orders), and the values of the algorithm's parameter ε (ε = 0.0 and ε = 0.2), but it does not detail specific hyperparameters (e.g., learning rate, batch size, epochs) or other system-level training settings. (A sketch of a dominating-set objective appears after the table.)
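
To make the experiment description above concrete, here is a minimal Python sketch of a dominating-set objective computed over a SNAP-style edge list. The exact definition the paper uses is not spelled out in this card; the version below assumes the common formulation f(S) = |S ∪ N(S)|, the number of vertices dominated by S, which is monotone and submodular. The helper names load_edges and make_dominating_set_objective are hypothetical.

```python
from collections import defaultdict

def load_edges(path):
    """Reads a SNAP-style edge list: one 'u v' pair per line, '#' comments."""
    adj = defaultdict(set)
    with open(path) as fh:
        for line in fh:
            parts = line.split()
            if not parts or parts[0].startswith("#"):
                continue
            u, v = parts[0], parts[1]
            adj[u].add(v)
            adj[v].add(u)
    return adj

def make_dominating_set_objective(adj):
    """Returns f(S) = |S ∪ N(S)|: the number of vertices dominated by S.
    This function is monotone and submodular (assumed formulation)."""
    def f(S):
        covered = set(S)
        for v in S:
            covered |= adj.get(v, set())
        return float(len(covered))
    return f
```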
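The sliding-window experiment referenced under Dataset Splits can be modeled as a stream of (operation, element) pairs. The sketch below assumes one natural reading of that setup: each step inserts the next node and, once the window is full, deletes the node that falls out of it. sliding_window_ops is a hypothetical helper, not code from the paper.

```python
def sliding_window_ops(nodes, window):
    """Turns an ordered node list into (op, element) pairs: insert the next
    node at each step and, once more than `window` nodes have been inserted,
    delete the oldest one so the active window keeps a fixed size."""
    ops = []
    for i, v in enumerate(nodes):
        ops.append(("insert", v))
        if i >= window:
            ops.append(("delete", nodes[i - window]))
    return ops
```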
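The two quantities the paper plots, cumulative oracle calls and the average value $\sum_{i=1}^{n} f(S_i)/n$, can be measured with a small harness that wraps the objective in a call counter. The interface assumed for the dynamic algorithm (insert, delete, solution, and an oracle attribute) is hypothetical, loosely mirroring the paper's INSERTION and DELETION routines; the actual implementation lives in the linked repository.

```python
class CountingOracle:
    """Wraps a set function and counts how many times it is evaluated."""
    def __init__(self, f):
        self._f = f
        self.calls = 0

    def __call__(self, s):
        self.calls += 1
        return self._f(s)

def evaluate_stream(algo, f, operations):
    """Replays (op, element) pairs. After each operation i, records the
    solution value f(S_i) and the algorithm's cumulative oracle-call count.
    `f` is the raw (uncounted) objective, so these evaluation queries do not
    inflate the algorithm's count. Returns (1/n) * sum_i f(S_i) and the
    per-operation call counts."""
    values, calls = [], []
    for op, e in operations:
        if op == "insert":
            algo.insert(e)
        else:
            algo.delete(e)
        values.append(f(algo.solution()))
        calls.append(algo.oracle.calls)
    return sum(values) / len(values), calls
```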
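To run the harness end to end without the paper's bucket/level data structure, one can plug in a naive baseline that reruns the classic greedy after every update. This is emphatically not the paper's algorithm, only a stand-in so the sketch executes; the file name, k, and window size below are illustrative.

```python
class RecomputeGreedy:
    """Naive baseline (NOT the paper's algorithm): stores the current ground
    set and reruns the classic greedy from scratch whenever a solution of
    cardinality at most k is requested. Slow; for small demos only."""
    def __init__(self, oracle, k):
        self.oracle = oracle
        self.k = k
        self.ground = set()

    def insert(self, e):
        self.ground.add(e)

    def delete(self, e):
        self.ground.discard(e)

    def solution(self):
        s = set()
        for _ in range(min(self.k, len(self.ground))):
            best = max(self.ground - s,
                       key=lambda e: self.oracle(s | {e}))
            s.add(best)
        return s

# Putting the pieces together (file name, k, and window are illustrative):
adj = load_edges("email-Enron.txt")
f = make_dominating_set_objective(adj)
algo = RecomputeGreedy(CountingOracle(f), k=20)
ops = sliding_window_ops(sorted(adj), window=5000)
avg_value, calls = evaluate_stream(algo, f, ops)
print(f"average f(S_i) = {avg_value:.1f}; total oracle calls = {calls[-1]}")
```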