Dynamic Submodular Maximization

Author: Morteza Monemizadeh

NeurIPS 2020

Reproducibility Assessment
Research Type: Theoretical
LLM Response: In this paper, we propose the first dynamic algorithm for this problem. Given a stream of inserts and deletes of elements of an underlying ground set V, we develop a dynamic algorithm that, with high probability, maintains a (1/2 - epsilon)-approximate solution for cardinality-constrained monotone submodular maximization over any sequence of z updates (inserts and deletes) in total time O(z * k^2 epsilon^-3 log^5 n). That is, the amortized update time of the algorithm is O(k^2 epsilon^-3 log^5 n). The paper focuses on algorithm design, proofs (Lemmas, Theorem 1), and theoretical analysis of approximation ratios and update times, without presenting empirical studies or data analysis.
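For context, the following is a minimal sketch of the classic static greedy baseline for this objective, assuming a generic caller-supplied monotone submodular function f. It is illustrative background only and is not the paper's dynamic algorithm, which must additionally handle inserts and deletes efficiently.

```python
# Minimal sketch: the classic static greedy for cardinality-constrained
# monotone submodular maximization (a (1 - 1/e)-approximation offline).
# Background only; NOT the paper's dynamic algorithm.

def greedy_max(f, ground_set, k):
    """Repeatedly add the element with the largest marginal gain under f."""
    solution = set()
    for _ in range(k):
        best, best_gain = None, 0.0
        for e in ground_set - solution:
            gain = f(solution | {e}) - f(solution)  # marginal gain of e
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:  # no remaining element has positive gain
            break
        solution.add(best)
    return solution

# Example objective: maximum coverage (monotone and submodular).
sets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"d"}}
coverage = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
print(greedy_max(coverage, set(sets), k=2))  # covers 3 of the 4 items, e.g. {1, 3}
```
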
Researcher Affiliation: Academia
LLM Response: Morteza Monemizadeh, Department of Mathematics and Computer Science, TU Eindhoven, the Netherlands (m.monemizadeh@tue.nl).

Pseudocode: Yes
LLM Response: The paper gives pseudocode for the following procedures (a sketch of the two basic primitives follows this list):
- Greedy (basic primitive). Input: sets S and G and parameters tau and k.
- Filtering (basic primitive). Input: sets R and G and parameters tau and k.
- Algorithm Sampling. Input: a ground set V of size n = |V| and a parameter 0 < k <= n.
- Subroutine Insertion. Input: a new element e and i_bar, V, S_i, R_i, G_i for i in {0, 1, ..., i_bar}.
- Subroutine Deletion. Input: an element e and i_bar, V, S_i, R_i, G_i for i in {0, 1, ..., i_bar}.
- Dynamic-Submodular-Maximization. Input: a sequence S = {Update(e_1), ..., Update(e_z)} of element updates of an underlying ground set V, where Update(e_l) is either Insert(e_l) or Delete(e_l).
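Below is a rough Python rendering of the Greedy and Filtering primitives, inferred only from the input signatures listed above; the paper's exact definitions may differ. The submodular objective f is an assumed caller-supplied function, not part of the listing.

```python
# Rough sketches of the two basic primitives, inferred from their input
# signatures alone; the paper's exact semantics may differ. `f` is an
# assumed caller-supplied monotone submodular objective.

def greedy(S, G, f, tau, k):
    """Threshold-greedy step: grow the partial solution G with elements of S
    whose marginal gain under f is at least tau, until |G| reaches k."""
    G = set(G)
    for e in S:
        if len(G) >= k:
            break
        if f(G | {e}) - f(G) >= tau:
            G.add(e)
    return G

def filtering(R, G, f, tau, k):
    """Keep only the elements of R whose marginal gain with respect to the
    current partial solution G is still at least tau.
    (k is kept for signature parity with the paper's listing.)"""
    return {e for e in R if f(G | {e}) - f(G) >= tau}
```
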
Open Source Code: No
LLM Response: The paper does not provide any links to open-source code or explicitly state that the code for its methodology is available.

Open Datasets: No
LLM Response: The paper discusses theoretical algorithms for submodular maximization and mentions applications in machine learning (e.g., clustering, summarization) but does not conduct empirical experiments on specific datasets. Therefore, it does not specify a dataset used for training or any public availability information for one.

Dataset Splits: No
LLM Response: The paper is theoretical and does not conduct empirical experiments with datasets. Therefore, it does not provide information on training, validation, or test splits.

Hardware Specification: No
LLM Response: The paper is theoretical and focuses on algorithm design and analysis. It does not describe any empirical experiments, and therefore no hardware specifications for running experiments are mentioned.

Software Dependencies: No
LLM Response: The paper describes a theoretical algorithm and its properties. It does not conduct empirical experiments or specify any software dependencies with version numbers (e.g., specific libraries, frameworks, or programming languages used for implementation).

Experiment Setup: No
LLM Response: The paper describes a theoretical algorithm and its analysis. It does not include details about an experimental setup, such as hyperparameters, training configurations, or system-level settings, because it does not report on empirical experiments.