Discretely Beyond $1/e$: Guided Combinatorial Algorithms for Submodular Maximization
Authors: Yixin Chen, Ankur Nath, Chunli Peng, Alan Kuhnle
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, along with Appendix F, we implement and empirically evaluate our randomized (0.385 − ε)-approximation algorithm (Alg. 2, FASTLS+GUIDEDRG) on two applications of size-constrained SM, and compare to several baselines in terms of objective value of solution and number of queries to f. |
| Researcher Affiliation | Academia | Yixin Chen, Ankur Nath, Chunli Peng, Alan Kuhnle Department of Computer Science & Engineering Texas A&M University College Station, TX {chen777, anath, chunli.peng, kuhnle}@tamu.edu |
| Pseudocode | Yes | Algorithm 1: Buchbinder et al. [9] (...) Algorithm 2: Randomized combinatorial approximation algorithm. (A hedged sketch of the random-greedy subroutine appears below the table.) |
| Open Source Code | Yes | Our code is available at https://gitlab.com/luciacyx/guided-rg.git. |
| Open Datasets | Yes | Data and code are provided in the supplementary material to regenerate the empirical results provided in the paper. |
| Dataset Splits | No | The paper does not explicitly mention training/validation/test splits in the traditional machine learning sense, as no model training is performed. It evaluates algorithms on problem instances (graphs, video frames) where the full dataset serves as the testbed. |
| Hardware Specification | Yes | We run all experiments on an Intel Xeon(R) W5-2445 CPU at 3.10 GHz with 20 cores, 64 GB of memory, and one NVIDIA RTX A4000 with 16 GB of memory. |
| Software Dependencies | No | The paper mentions 'standard multiprocessing provided in Python' but does not specify exact version numbers for Python or any other libraries or frameworks. |
| Experiment Setup | Yes | For all experiments, we set the error rate, ϵ, to 0.01 for FASTLS+GUIDEDRG and to 0.1 for Lee et al. [21]. |
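For orientation, the subroutine named in the Pseudocode row (Algorithm 1, attributed to Buchbinder et al. [9]) is the classical random greedy for size-constrained submodular maximization, which the paper's Algorithm 2 (FASTLS+GUIDEDRG) builds on. Below is a minimal, self-contained Python sketch of the plain, unguided random greedy on a toy maximum-cut instance. The function names and the toy objective are illustrative assumptions, not the authors' implementation; their actual code is in the GitLab repository linked above.

```python
import random

def random_greedy(f, ground_set, k, rng=None):
    """Random greedy (Buchbinder et al. [9]) for maximizing a
    non-monotone submodular set function f under |S| <= k.

    At each of k steps, collect the (up to) k elements with the
    largest positive marginal gains, pad the pool to size k with
    dummy elements, and add a uniformly random draw from the pool.
    """
    rng = rng or random.Random(0)
    S = set()
    for _ in range(k):
        base = f(S)
        # Marginal gain f(S + v) - f(S) for every remaining element.
        gains = {v: f(S | {v}) - base for v in ground_set - S}
        top = sorted((v for v in gains if gains[v] > 0),
                     key=gains.get, reverse=True)[:k]
        # Padding with None models dummy elements of zero marginal
        # gain; drawing a dummy leaves S unchanged for this step.
        pool = top + [None] * (k - len(top))
        v = rng.choice(pool)
        if v is not None:
            S.add(v)
    return S

# Toy instance: cardinality-constrained maximum cut, a non-monotone
# submodular objective like the graph application the report mentions.
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 0)]

def cut_value(S):
    return sum((u in S) != (v in S) for u, v in edges)

print(random_greedy(cut_value, set(range(5)), k=2))
```

As the name FASTLS+GUIDEDRG suggests, the paper's Algorithm 2 guides these random choices with a local optimum produced by FASTLS to push the ratio past $1/e$; the sketch above shows only the unguided classical baseline.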