Dynamic Non-monotone Submodular Maximization
Authors: Kiarash Banihashem, Leyla Biabani, Samira Goudarzi, MohammadTaghi Hajiaghayi, Peyman Jabbarzade, Morteza Monemizadeh
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we empirically study our (8+ε)-approximation dynamic algorithm. We implement our codes in C++ and run them on a MacBook laptop with 8 GB RAM and an M1 processor. We empirically study the performance of our algorithm for video summarization and the Max-Cut problem. We run our experiments on YouTube and Open Video Project (OVP) datasets from [20]. |
| Researcher Affiliation | Academia | Kiarash Banihashem (kiarash@umd.edu), University of Maryland; Leyla Biabani (l.biabani@tue.nl), TU Eindhoven; Samira Goudarzi (samirag@umd.edu), University of Maryland; Mohammad Taghi Hajiaghayi (hajiagha@cs.umd.edu), University of Maryland; Peyman Jabbarzade (peymanj@umd.edu), University of Maryland; Morteza Monemizadeh (m.monemizadeh@tue.nl), TU Eindhoven |
| Pseudocode | Yes | The pseudocode of our algorithm is given in Algorithm 1, Algorithm 2, and Algorithm 3. |
| Open Source Code | No | The paper does not contain an explicit statement about open-sourcing the code for the methodology, nor does it provide a direct link to a code repository. |
| Open Datasets | Yes | We run our experiments on YouTube and Open Video Project (OVP) datasets from [20]. |
| Dataset Splits | No | The paper describes a 'sliding window model' for updates and compares algorithm performance, but it does not specify explicit training, validation, or test dataset splits (e.g., percentages or fixed sets) for reproducibility. |
| Hardware Specification | Yes | We implement our codes in C++ and run them on a MacBook laptop with 8 GB RAM and an M1 processor. |
| Software Dependencies | No | The paper mentions that the code is implemented in 'C++', but it does not provide specific version numbers for compilers, libraries, or any other software dependencies needed to replicate the experiments. |
| Experiment Setup | No | The paper mentions setting `k` for the cardinality constraint and `ε` for approximation, and discusses the dynamic setup (sliding window model). However, it does not provide specific details such as learning rates, batch sizes, optimizer choices, or other hyperparameters typically found in experimental setup descriptions for deep learning or complex optimization models. |
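For context on the objective behind the Max-Cut experiments: the cut function f(S) = number of edges with exactly one endpoint in S is a canonical non-monotone submodular function, which is what makes the paper's setting "non-monotone". The sketch below (a toy illustration, not the paper's (8+ε)-approximation dynamic algorithm; the graph, function names, and plain greedy are our own assumptions) evaluates this objective on a small graph under a cardinality constraint k.

```python
# Toy sketch of the non-monotone submodular Max-Cut objective.
# NOT the paper's dynamic algorithm -- just an illustration of the
# objective f and a plain greedy under a cardinality constraint k.

def cut_value(edges, S):
    """f(S) = number of edges crossing the cut (S, V \\ S)."""
    return sum(1 for u, v in edges if (u in S) != (v in S))

def greedy_max_cut(edges, vertices, k):
    """Plain greedy: repeatedly add the element with the largest
    positive marginal gain; stop early if none exists (f is
    non-monotone, so marginal gains can be negative)."""
    S = set()
    for _ in range(k):
        best, best_gain = None, 0
        for v in vertices - S:
            gain = cut_value(edges, S | {v}) - cut_value(edges, S)
            if gain > best_gain:
                best, best_gain = v, gain
        if best is None:
            break
        S.add(best)
    return S

# 4-cycle: 0-1, 1-2, 2-3, 3-0
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
vertices = {0, 1, 2, 3}

S = greedy_max_cut(edges, vertices, k=2)
print(S, cut_value(edges, S))      # an independent side of the cycle, cut value 4
print(cut_value(edges, vertices))  # 0 -- adding everything destroys the cut,
                                   # demonstrating non-monotonicity
```

The paper's actual contribution is maintaining such a solution under dynamic insertions and deletions (including a sliding-window stream), which this static greedy does not attempt.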