Quality-Diversity Algorithms Can Provably Be Helpful for Optimization
Authors: Chao Qian, Ke Xue, Ren-Jian Wang
IJCAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | In this paper, we try to shed some light on the optimization ability of QD algorithms via rigorous runtime analysis. By comparing the popular QD algorithm MAP-Elites with (µ + 1)-EA (a typical EA focusing only on finding better objective values), we prove that on two NP-hard problem classes with wide applications, i.e., monotone approximately submodular maximization with a size constraint, and set cover, MAP-Elites can achieve the (asymptotically) optimal polynomial-time approximation ratio, while (µ + 1)-EA requires exponential expected time on some instances. This provides theoretical justification that QD algorithms can be helpful for optimization, and discloses that the simultaneous search for high-performing solutions with diverse behaviors can provide stepping stones to good overall solutions and help avoid local optima. |
| Researcher Affiliation | Academia | Chao Qian, Ke Xue, and Ren-Jian Wang; National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China; School of Artificial Intelligence, Nanjing University, Nanjing 210023, China; {qianc, xuek, wangrj}@lamda.nju.edu.cn |
| Pseudocode | Yes | The paper gives pseudocode for both algorithms: Algorithm 1 (MAP-Elites) and Algorithm 2 ((µ + 1)-EA). Hedged sketches of both appear after this table. |
| Open Source Code | No | The paper only states: 'The supplementary material is available at https://arxiv.org/abs/2401.10539.' This does not explicitly confirm that source code for the described methodology is provided at that link, and the paper contains no other clear statement about code release. |
| Open Datasets | No | The paper is theoretical, providing proofs and runtime analyses for algorithms on problem classes (e.g., monotone approximately submodular maximization, set cover) and specific theoretical instances (e.g., Example 1, Example 2), rather than conducting experiments on empirical datasets. Therefore, no training data is used or made publicly available. |
| Dataset Splits | No | This paper conducts theoretical runtime analysis and provides proofs for algorithms. It does not involve empirical experiments with datasets, and thus no dataset splits for training, validation, or testing are specified. |
| Hardware Specification | No | This paper focuses on theoretical analysis and proofs of algorithms. It does not describe any empirical experiments that would require specific hardware specifications. |
| Software Dependencies | No | The paper presents theoretical runtime analysis and proofs for algorithms; it does not describe empirical experiments or require specific software dependencies with version numbers for reproducibility. |
| Experiment Setup | No | The paper provides theoretical analysis and proofs of algorithms, and therefore does not include details on experimental setup such as hyperparameters or system-level training settings. |
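For readers who want a concrete picture of Algorithm 1, the following is a minimal Python sketch of MAP-Elites in the paper's subset-selection setting, where a solution is a bitstring over n items and its behavior descriptor is its subset size. This is an illustration under our own assumptions, not the paper's exact pseudocode: the objective `f`, the budget `max_iters`, and seeding the archive with the empty set are illustrative choices, and handling of the size constraint (e.g., discarding offspring with more than k selected items) is omitted for brevity.

```python
import random

def map_elites(f, n, max_iters=10_000, seed=0):
    """Minimal MAP-Elites sketch for pseudo-Boolean maximization.

    The behavior descriptor of a bitstring x is its number of 1-bits |x|,
    so the archive keeps at most one elite per subset size 0..n.
    """
    rng = random.Random(seed)
    archive = {}  # subset size -> (solution, objective value)

    def try_insert(x):
        size = sum(x)
        fx = f(x)
        # Keep x only if its cell is empty or x improves the cell's elite.
        if size not in archive or fx > archive[size][1]:
            archive[size] = (x, fx)

    try_insert([0] * n)  # illustrative: seed the archive with the empty set
    for _ in range(max_iters):
        parent, _ = archive[rng.choice(list(archive))]
        # Standard bit-wise mutation: flip each bit independently with prob 1/n.
        child = [b ^ (rng.random() < 1.0 / n) for b in parent]
        try_insert(child)
    return archive

# Toy usage with a hypothetical monotone objective: count of covered elements.
sets = [{0, 1}, {1, 2}, {2, 3}, {3, 4}]
cover = lambda x: len(set().union(*(s for s, bit in zip(sets, x) if bit)))
archive = map_elites(cover, n=len(sets), max_iters=2_000)
```

Because the archive separates solutions by size, a small set that scores poorly on f can survive and later serve as a stepping stone to a good larger set, which is the mechanism the paper's analysis highlights.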
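For contrast, here is a sketch of Algorithm 2, the (µ + 1)-EA, which keeps a population of µ solutions ranked by objective value only and maintains no behavior archive. Again, the uniform random initialization, population size, and stopping rule are illustrative assumptions rather than details taken from the paper.

```python
import random

def mu_plus_one_ea(f, n, mu=10, max_iters=10_000, seed=0):
    """Minimal (mu + 1)-EA sketch: survivor selection uses f alone."""
    rng = random.Random(seed)
    # Illustrative assumption: uniform random initialization.
    population = [[rng.randint(0, 1) for _ in range(n)] for _ in range(mu)]
    for _ in range(max_iters):
        parent = rng.choice(population)
        # Same bit-wise mutation operator as in the MAP-Elites sketch.
        child = [b ^ (rng.random() < 1.0 / n) for b in parent]
        population.append(child)
        # Keep the best mu individuals (re-evaluates f for simplicity).
        population.sort(key=f, reverse=True)
        population.pop()
    return max(population, key=f)
```

The only structural difference from the MAP-Elites sketch is survivor selection: the (µ + 1)-EA makes all solutions compete on f alone, whereas MAP-Elites lets solutions of different sizes coexist, which is why the paper can show the former getting stuck on local optima that the latter provably escapes.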