Learning-Augmented Dynamic Submodular Maximization
Authors: Arpit Agarwal, Eric Balkanski
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 7 Experiments |
| Researcher Affiliation | Academia | Arpit Agarwal, Indian Institute of Technology Bombay, aarpit@iitb.ac.in; Eric Balkanski, Columbia University, eb3224@columbia.edu |
| Pseudocode | Yes | Algorithm 1 The Algorithmic Framework; Algorithm 2 WARMUP-UPDATESOL; Algorithm 3 UPDATESOLMAIN; Algorithm 4 PRECOMPUTATIONSMAIN; Algorithm 5 ROBUST1FROMDYNAMIC; Algorithm 6 PRECOMPUTATIONSFULL; Algorithm 7 UPDATESOLFULL |
| Open Source Code | No | We are not submitting the code because one of the libraries we extensively use/modify requires several conditions for distributing derivatives of their library. We did not have time to satisfy all these conditions before the deadline. However, we have carefully read these conditions and will definitely be able to meet these conditions before the potential camera-ready deadline, at which point we would release our code. |
| Open Datasets | Yes | We perform experiments on a subset of the Enron dataset from the SNAP Large Networks Data Collection [21]. |
| Dataset Splits | No | The paper describes a sliding-window protocol for generating a dynamic stream of insertions and deletions and sets parameters such as the window size, but it does not specify explicit training, validation, or test splits as percentages or sample counts (a sketch of such a protocol is given below the table). |
| Hardware Specification | No | we only compute small scale experiments which can be run within a few minutes on a CPU. |
| Software Dependencies | No | We implemented our algorithm and OFFLINEGREEDY in C++, and used the C++ implementation of DYNAMIC that is provided by [20]. No compiler or library version numbers are given (a hedged sketch of a standard greedy baseline is given below the table). |
| Experiment Setup | Yes | We set ϵ = 0.2 for all algorithms. |
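The sliding-window protocol noted under Dataset Splits is straightforward to reconstruct. The following is a minimal C++ sketch, not the authors' code: the window size `W` and the `Edge`/`Update` types are illustrative assumptions. Each arriving edge is emitted as an insertion; once the window exceeds `W` elements, the oldest edge is emitted as a deletion.

```cpp
// Minimal sketch (not the authors' code) of a sliding-window protocol that
// turns a static edge list into a dynamic stream of insertions and deletions.
// The window size W and the Edge/Update types are illustrative assumptions.
#include <cstddef>
#include <deque>
#include <iostream>
#include <vector>

struct Edge { int u, v; };
enum class Op { Insert, Delete };
struct Update { Op op; Edge e; };

// Slide a window of size W over the edge sequence: each step inserts the
// next edge and, once the window is full, deletes the oldest one.
std::vector<Update> slidingWindowStream(const std::vector<Edge>& edges,
                                        std::size_t W) {
    std::vector<Update> stream;
    std::deque<Edge> window;
    for (const Edge& e : edges) {
        window.push_back(e);
        stream.push_back({Op::Insert, e});
        if (window.size() > W) {
            stream.push_back({Op::Delete, window.front()});
            window.pop_front();
        }
    }
    return stream;
}

int main() {
    std::vector<Edge> edges = {{1, 2}, {2, 3}, {3, 4}, {4, 5}};
    for (const Update& u : slidingWindowStream(edges, /*W=*/2)) {
        std::cout << (u.op == Op::Insert ? "insert " : "delete ")
                  << u.e.u << "-" << u.e.v << "\n";
    }
}
```

Applied to the Enron edge list with the paper's window size, this would yield the kind of insertion/deletion stream on which the dynamic algorithms are evaluated.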
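The OFFLINEGREEDY baseline is not spelled out in the excerpts above, but the standard offline baseline for monotone submodular maximization under a cardinality constraint is the classic greedy of Nemhauser, Wolsey, and Fisher, which achieves a (1 - 1/e) approximation. The sketch below is an assumption in that spirit, instantiated with an illustrative coverage objective over graph neighborhoods; it is not the authors' implementation.

```cpp
// Hedged sketch of the classic greedy for monotone submodular maximization
// under a cardinality constraint k (an assumption, not the paper's
// OFFLINEGREEDY). The coverage objective over closed neighborhoods is
// purely illustrative.
#include <iostream>
#include <set>
#include <unordered_map>
#include <unordered_set>

using Graph = std::unordered_map<int, std::unordered_set<int>>;

// f(S) = number of vertices covered by the closed neighborhoods of S
// (monotone and submodular).
std::size_t coverage(const Graph& g, const std::set<int>& S) {
    std::unordered_set<int> covered;
    for (int v : S) {
        covered.insert(v);
        auto it = g.find(v);
        if (it != g.end()) covered.insert(it->second.begin(), it->second.end());
    }
    return covered.size();
}

// Repeatedly add the element with the largest strictly positive marginal gain.
std::set<int> greedy(const Graph& g, std::size_t k) {
    std::set<int> S;
    while (S.size() < k) {
        int best = -1;
        std::size_t bestVal = coverage(g, S);  // require a strict improvement
        for (const auto& [v, nbrs] : g) {
            if (S.count(v)) continue;
            std::set<int> T = S;
            T.insert(v);
            std::size_t val = coverage(g, T);
            if (val > bestVal) { bestVal = val; best = v; }
        }
        if (best < 0) break;  // no remaining element improves f
        S.insert(best);
    }
    return S;
}

int main() {
    Graph g = {{1, {2, 3}}, {2, {1}}, {3, {1, 4}}, {4, {3}}, {5, {}}};
    for (int v : greedy(g, 2)) std::cout << v << " ";
    std::cout << "\n";
}
```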