Runtime Analysis for the NSGA-II: Provable Speed-Ups from Crossover

Authors: Benjamin Doerr, Zhongdi Qu

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Very recently, the first mathematical runtime analyses of the NSGA-II, the most common multi-objective evolutionary algorithm, have been conducted. Continuing this research direction, we prove that the NSGA-II optimizes the OneJumpZeroJump benchmark asymptotically faster when crossover is employed. Together with a parallel independent work by Dang, Opris, Salehi, and Sudholt, this is the first time such an advantage of crossover is proven for the NSGA-II. Our arguments can be transferred to single-objective optimization, where they prove that crossover can speed up the (μ + 1) genetic algorithm in a different and more pronounced way than known before. Our experiments confirm the added value of crossover and show that the observed advantages are even larger than what our proofs can guarantee.
Researcher Affiliation | Academia | Laboratoire d'Informatique (LIX), École Polytechnique, CNRS, Institut Polytechnique de Paris, Palaiseau, France
Pseudocode | No | The paper describes the algorithms in text but does not provide structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not include any statement or link providing concrete access to source code for the methodology described.
Open Datasets | Yes | To this aim, we regard the OneJumpZeroJump benchmark (Doerr and Zheng 2021), which is a bi-objective version of the classic Jump benchmark intensively studied in the analysis of single-objective search heuristics.
Dataset Splits | No | The paper does not provide dataset split information (exact percentages, sample counts, citations to predefined splits, or a detailed splitting methodology) typical of training/validation/test sets in machine learning. It describes experimental runs on benchmark problems.
Hardware Specification | No | The paper states that the algorithms were implemented in Python but does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments.
Software Dependencies | No | The paper mentions that it 'implemented the algorithm as described in the preliminaries section in Python' but does not provide specific ancillary software details (e.g., library or solver names with version numbers).
Experiment Setup | Yes | Settings: We implemented the algorithm as described in the preliminaries section in Python. We regarded the problem sizes n = 50 and n = 100 and the jump size k = 2. We used the population sizes N = 2(n − 2k + 3) and N = 4(n − 2k + 3) and employed fair selection for variation. With probability pc, uniform crossover is applied followed by bit-wise mutation; otherwise only mutation is performed. We regarded the crossover rates pc = 0 (no crossover) and pc = 0.9. We conducted 10 independent repetitions per setting.
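For concreteness, the bi-objective benchmark quoted under "Open Datasets" can be sketched in a few lines of Python. The following is a minimal sketch of the OneJumpZeroJump function following the definition of Doerr and Zheng (2021): each objective is a Jump function of jump size k, one counting ones and one counting zeros. Function and parameter names are my own, not from the paper.

```python
def one_jump_zero_jump(x, k):
    """Bi-objective OneJumpZeroJump value of a bit string x (sequence of 0/1).

    Sketch of the definition of Doerr and Zheng (2021): the first objective
    is a Jump function in the number of ones, the second in the number of
    zeros, so 1^n and 0^n are the two extremal Pareto optima.
    """
    n = len(x)
    ones = sum(x)
    zeros = n - ones
    # First objective: Jump with respect to the number of ones.
    f1 = k + ones if ones <= n - k or ones == n else n - ones
    # Second objective: Jump with respect to the number of zeros.
    f2 = k + zeros if zeros <= n - k or zeros == n else n - zeros
    return (f1, f2)
```

For n = 5 and k = 2, the all-ones string evaluates to (7, 2) and the all-zeros string to (2, 7); strings inside the "gap" (e.g. four ones) score only n − |x|₁ in the first objective, which is what makes the benchmark hard without crossover.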
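The variation step described in the settings (with probability pc, uniform crossover followed by bit-wise mutation; otherwise mutation only) can be sketched as below. The mutation rate 1/n is the standard choice for bit-wise mutation and is an assumption here, as the quoted settings do not state it.

```python
import random

def vary(parent1, parent2, pc=0.9):
    """One offspring via the variation scheme from the paper's settings.

    With probability pc: uniform crossover of the two parents, then
    bit-wise mutation. Otherwise: bit-wise mutation of parent1 only.
    Mutation rate 1/n is assumed (standard choice, not stated in the quote).
    """
    n = len(parent1)
    if random.random() < pc:
        # Uniform crossover: each bit is taken from either parent with prob. 1/2.
        child = [parent1[i] if random.random() < 0.5 else parent2[i]
                 for i in range(n)]
    else:
        child = list(parent1)
    # Bit-wise mutation: flip each bit independently with probability 1/n.
    return [b ^ 1 if random.random() < 1 / n else b for b in child]
```

Setting pc = 0 recovers the mutation-only baseline compared against in the experiments, so the same loop can run both configurations.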