Theoretical Analyses of Multi-Objective Evolutionary Algorithms on Multi-Modal Objectives
Authors: Benjamin Doerr, Weijie Zheng
AAAI 2021, pp. 12293-12301
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We prove that the simple evolutionary multi-objective optimizer (SEMO) cannot compute the full Pareto front. In contrast, for all problem sizes $n$ and all jump sizes $k \in [4..\frac{n}{2}-1]$, the global SEMO (GSEMO) covers the Pareto front in $\Theta((n-2k)n^k)$ iterations in expectation. To improve the performance, we combine the GSEMO with two approaches, a heavy-tailed mutation operator and a stagnation detection strategy, that showed advantages in single-objective multi-modal problems. Runtime improvements of asymptotic order at least $k^{\Omega(k)}$ are shown for both strategies. Our experiments verify the substantial runtime gains already for moderate problem sizes. |
| Researcher Affiliation | Academia | (1) Laboratoire d'Informatique (LIX), École Polytechnique, CNRS, Institut Polytechnique de Paris, Palaiseau, France; (2) Guangdong Provincial Key Laboratory of Brain-inspired Intelligent Computation, Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China |
| Pseudocode | Yes | Algorithm 1 SEMO; Algorithm 2 GSEMO; Algorithm 3 the GSEMO-HTM algorithm with power-law exponent β > 1; Algorithm 4 SD-GSEMO with safety parameter R (a hedged sketch of the GSEMO loop appears below the table). |
| Open Source Code | No | The paper does not provide any statement or link indicating the release of open-source code for the described methodology. |
| Open Datasets | No | The paper defines a new theoretical benchmark, ONEJUMPZEROJUMP$_{n,k}$, for analysis and experimentation rather than using a publicly available dataset in the traditional sense (a sketch of the benchmark appears below the table). |
| Dataset Splits | No | The paper performs theoretical analyses and simulations on a defined function; it does not mention training/validation/test dataset splits as commonly used for machine learning models. |
| Hardware Specification | No | The paper describes its experimental settings for the defined problem but does not specify any hardware details like GPU/CPU models or memory used. |
| Software Dependencies | No | The paper does not list specific software components with version numbers needed to replicate the experiments. |
| Experiment Setup | Yes | Our experimental settings are the same for all algorithms. ONEJUMPZEROJUMP$_{n,k}$: jump size k = 4 and problem size n = 10, 14, ..., 50. β = 1.5, as suggested in (Doerr et al. 2017), for the power-law distribution in GSEMO-HTM. R = n for SD-GSEMO and SD-GSEMO-Ind, as suggested in (Rajabi and Witt 2020). 20 independent runs for each setting (sketches of the heavy-tailed mutation operator and the stagnation-detection budget appear below). |
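
For concreteness, here is a minimal Python sketch of the ONEJUMPZEROJUMP$_{n,k}$ benchmark. It assumes the paper's construction of applying the classic Jump function to the number of ones for the first objective and to the number of zeros for the second; the function and variable names are ours, not the paper's.

```python
def one_jump_zero_jump(x, k):
    """Bi-objective ONEJUMPZEROJUMP_{n,k} value of a bit string x (to be
    maximized). Sketch assuming the paper's construction: the first
    objective is the classic Jump function on the number of ones, the
    second the same Jump function on the number of zeros."""
    n = len(x)
    ones = sum(x)
    zeros = n - ones
    # First objective: rewards more ones, with a fitness valley of the
    # k - 1 ones-counts just below the all-ones string.
    f1 = k + ones if ones <= n - k or ones == n else n - ones
    # Second objective: the symmetric counterpart on the number of zeros.
    f2 = k + zeros if zeros <= n - k or zeros == n else n - zeros
    return (f1, f2)
```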
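The GSEMO loop (Algorithm 2) can then be sketched as follows: keep an archive of mutually non-dominated solutions, pick a parent uniformly at random, apply standard bit mutation with rate 1/n, and update the archive. This is one common formulation, not the paper's exact pseudocode; `dominates` and `gsemo` are our names.

```python
import random

def dominates(f, g):
    """True if objective vector f Pareto-dominates g (maximization)."""
    return all(a >= b for a, b in zip(f, g)) and f != g

def gsemo(n, k, objective, max_iters):
    """Global SEMO sketch: standard bit mutation flips each bit
    independently with probability 1/n."""
    x = tuple(random.randint(0, 1) for _ in range(n))
    pop = {x: objective(x, k)}  # bit string -> objective vector
    for _ in range(max_iters):
        parent = random.choice(list(pop))
        child = tuple(b ^ (random.random() < 1 / n) for b in parent)
        fc = objective(child, k)
        # Accept the child unless some archived point dominates or equals it.
        if not any(dominates(f, fc) or f == fc for f in pop.values()):
            pop = {s: f for s, f in pop.items() if not dominates(fc, f)}
            pop[child] = fc
    return pop
```

For example, `gsemo(20, 4, one_jump_zero_jump, 10**6)` returns the archive, which covers the Pareto front once every Pareto-optimal objective vector has been reached.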
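The heavy-tailed mutation operator of GSEMO-HTM replaces the fixed rate 1/n by α/n, where α is drawn from a power-law distribution with exponent β (β = 1.5 in the experiments). A sketch, assuming the distribution is supported on {1, ..., n/2} as in Doerr et al. (2017); the names are ours.

```python
import random

def power_law_strength(n, beta=1.5):
    """Sample alpha in {1, ..., n // 2} with Pr[alpha = i] proportional
    to i^(-beta)."""
    support = range(1, n // 2 + 1)
    weights = [i ** -beta for i in support]
    return random.choices(support, weights=weights)[0]

def heavy_tailed_mutation(x, beta=1.5):
    """Flip each bit independently with probability alpha / n for a
    freshly sampled heavy-tailed strength alpha."""
    n = len(x)
    alpha = power_law_strength(n, beta)
    return tuple(b ^ (random.random() < alpha / n) for b in x)
```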
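SD-GSEMO (with safety parameter R = n above) follows the stagnation-detection idea of Rajabi and Witt (2020): mutation stays at strength r until a budget of unsuccessful steps is exhausted, then escalates to r + 1. A hypothetical sketch of such a budget; the exact constants used in the paper may differ.

```python
import math

def sd_budget(n, r, R):
    """Number of unsuccessful steps allowed at mutation strength r before
    escalating to r + 1, modeled on Rajabi and Witt (2020); the precise
    constant is an assumption of this sketch."""
    return math.ceil(2 * (math.e * n / r) ** r * math.log(n * R))
```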