Improved Evolutionary Algorithms for Submodular Maximization with Cost Constraints
Authors: Yanhui Zhu, Samik Basu, A. Pavan
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, the empirical evaluations carried out through extensive experimentation substantiate the efficiency and effectiveness of our proposed algorithms. Our algorithms consistently outperform existing methods, producing higher-quality solutions. |
| Researcher Affiliation | Academia | Yanhui Zhu, Samik Basu, A. Pavan. Department of Computer Science, Iowa State University, Ames, IA, USA. yanhui@iastate.edu, {sbasu, pavan}@cs.iastate.edu |
| Pseudocode | Yes | Algorithm 1: EVO-SMC and Algorithm 2: ST-EVO-SMC |
| Open Source Code | Yes | We implement our algorithms and baselines in C++ (https://github.com/yz24/evo-SMC). |
| Open Datasets | Yes | In our experiments, we use Facebook [Leskovec and Mcauley, 2012] and Film-Trust networks [Kunegis, 2013]... We use Protein network [Stelzl et al., 2005] and Eu-Email network [Leskovec et al., 2007]... We use a real-world air quality data (light and temperature measures) [Zheng et al., 2013] |
| Dataset Splits | No | The paper runs algorithms multiple times and reports medians, but does not provide specific train/validation/test splits, sample counts, or cross-validation details for reproduction. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper states 'We implement our algorithms and baselines in C++', but does not provide specific version numbers for C++ compiler, libraries, or any other software dependencies. |
| Experiment Setup | No | The paper describes the applications and general settings (e.g., budget β, cost penalty q, number of runs), but does not provide specific experimental setup details such as hyperparameter values (e.g., learning rate, batch size), optimizer settings, or detailed training configurations. |
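The table notes that the paper provides pseudocode for Algorithm 1 (EVO-SMC) and Algorithm 2 (ST-EVO-SMC). For orientation only, the sketch below shows a generic (1+1)-style evolutionary algorithm for maximizing a monotone submodular objective (set coverage) under a cost budget. The function names, the 1/n bit-flip mutation rate, and the feasibility-rejecting acceptance rule are illustrative assumptions; this is not a reproduction of the paper's EVO-SMC.

```python
import random

def evaluate_coverage(solution, coverage_sets):
    """Submodular objective: number of ground-set elements covered."""
    covered = set()
    for i, bit in enumerate(solution):
        if bit:
            covered |= coverage_sets[i]
    return len(covered)

def one_plus_one_ea(coverage_sets, costs, budget, iterations=2000, seed=0):
    """Generic (1+1) EA sketch under a cost constraint (not the paper's EVO-SMC)."""
    rng = random.Random(seed)
    n = len(coverage_sets)
    current = [0] * n          # start from the empty set, which is always feasible
    best_value = 0
    for _ in range(iterations):
        # Standard bit-flip mutation: each bit flips independently with prob. 1/n.
        candidate = [b ^ (rng.random() < 1.0 / n) for b in current]
        cost = sum(c for c, b in zip(costs, candidate) if b)
        if cost <= budget:     # reject offspring that violate the budget
            value = evaluate_coverage(candidate, coverage_sets)
            if value >= best_value:   # accept ties to allow plateau moves
                current, best_value = candidate, value
    return current, best_value
```

On a toy instance (four candidate sets with unit and non-unit costs, budget 2), the loop keeps only budget-feasible offspring and monotonically improves the coverage value of the retained solution.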