OPT-GAN: A Broad-Spectrum Global Optimizer for Black-Box Problems by Learning Distribution

Authors: Minfang Lu, Shuai Ning, Shuangrong Liu, Fengyang Sun, Bo Zhang, Bo Yang, Lin Wang

AAAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on diverse BBO benchmarks and high-dimensional real-world applications exhibit that OPT-GAN outperforms other traditional and neural net-based BBO algorithms. The code and Appendix are available at https://github.com/NBICLAB/OPT-GAN
Researcher Affiliation | Collaboration | Minfang Lu 1,5,*, Shuai Ning 1,2,*, Shuangrong Liu 3, Fengyang Sun 4, Bo Zhang 1,2, Bo Yang 2, Lin Wang 1; 1 Shandong Provincial Key Laboratory of Network Based Intelligent Computing, University of Jinan, Jinan 250022, China; 2 Quan Cheng Laboratory, Jinan 250100, China; 3 Department of Computer Science, The University of Suwon, Hwaseong 18323, South Korea; 4 Victoria University of Wellington, Wellington 6140, New Zealand; 5 Cainiao Network, Hangzhou, China
Pseudocode | Yes | Algorithm 1: OPT-GAN (see Appendix Algorithm 2 for the full version); a hedged sketch of such a GAN-guided optimization loop is given after this table.
Open Source Code | Yes | The code and Appendix are available at https://github.com/NBICLAB/OPT-GAN
Open Datasets | Yes | The baseline methods are tested on the challenging black-box benchmarks from the COCO platform (Hansen et al. 2021), the CEC'19 Benchmark Suite (Price et al. 2018), Conformal Bent Cigar (Liu et al. 2020a), and Simulationlib (Surjanovic and Bingham 2021); a sketch of running an optimizer on the COCO suite also follows the table.
Dataset Splits | No | The paper reports benchmark results and performance comparisons but does not state explicit train/validation/test splits (by percentage or count) needed for reproducibility.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, memory, or cloud instance types used for running experiments.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers.
Experiment Setup | Yes | The detailed configurations about model specifications, benchmarks, and experimental settings are introduced in Appendix D.1. We also conducted the ablation analysis, convergence analysis, and hyperparameter analysis for OPT-GAN (see Appendix D.2, D.3, and D.4). Table 1 (Notation / Description) lists the main parameters (e.g., K, S, M, MAXFes, GANIter, DIter, β, λ, a, PreIter), which are detailed in the appendix.
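
The full pseudocode is only available in the paper and its appendix. As a reading aid, the sketch below shows a minimal GAN-guided black-box optimization loop in the spirit of Algorithm 1: train a small GAN on the current elite solutions, sample new candidates from the generator, evaluate them with the black-box objective, and keep the best. All names, hyperparameters, and the toy objective are illustrative assumptions and do not reproduce the authors' implementation at https://github.com/NBICLAB/OPT-GAN.

```python
# Hypothetical sketch of a GAN-guided black-box optimization loop
# (inspired by Algorithm 1 of OPT-GAN; NOT the authors' code).
import numpy as np
import torch
import torch.nn as nn

def sphere(x):                      # toy black-box objective (minimization)
    return float(np.sum(x ** 2))

DIM, POP, ELITE, NOISE = 10, 64, 16, 8
MAX_FES, GAN_ITER, D_ITER = 5000, 20, 2   # placeholder budgets, not Table 1 values

G = nn.Sequential(nn.Linear(NOISE, 64), nn.ReLU(), nn.Linear(64, DIM))
D = nn.Sequential(nn.Linear(DIM, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

# initial random population, evaluated with the black box
pop = np.random.uniform(-5, 5, size=(POP, DIM))
fitness = np.array([sphere(x) for x in pop])
fes = POP

while fes < MAX_FES:
    # 1) treat the current elite (most promising) solutions as "real" samples
    elite = torch.tensor(pop[np.argsort(fitness)[:ELITE]], dtype=torch.float32)

    # 2) train the GAN so the generator models the elite distribution
    for _ in range(GAN_ITER):
        for _ in range(D_ITER):
            fake = G(torch.randn(ELITE, NOISE)).detach()
            d_loss = (bce(D(elite), torch.ones(ELITE, 1))
                      + bce(D(fake), torch.zeros(ELITE, 1)))
            opt_d.zero_grad()
            d_loss.backward()
            opt_d.step()
        g_loss = bce(D(G(torch.randn(ELITE, NOISE))), torch.ones(ELITE, 1))
        opt_g.zero_grad()
        g_loss.backward()
        opt_g.step()

    # 3) sample new candidates from the generator and evaluate them
    with torch.no_grad():
        cand = G(torch.randn(POP, NOISE)).numpy()
    cand = np.clip(cand, -5, 5)               # keep candidates inside the box
    cand_fit = np.array([sphere(x) for x in cand])
    fes += POP

    # 4) survivor selection: keep the best POP solutions seen so far
    pop = np.vstack([pop, cand])
    fitness = np.concatenate([fitness, cand_fit])
    keep = np.argsort(fitness)[:POP]
    pop, fitness = pop[keep], fitness[keep]

print("best objective value:", fitness[0])
```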
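
The COCO benchmarks listed above are publicly available. The snippet below is a minimal sketch of evaluating an optimizer on the COCO bbob suite through the `cocoex` Python module distributed with the COCO platform; the option string, evaluation budget, and the random-search "optimizer" are placeholder assumptions, not the authors' experimental harness.

```python
# Minimal sketch: evaluating a black-box optimizer on the COCO "bbob" suite.
# Assumes the cocoex package from the COCO platform is installed.
import numpy as np
import cocoex

def random_search(problem, budget):
    """Placeholder optimizer: uniform sampling within the problem bounds."""
    lb, ub = problem.lower_bounds, problem.upper_bounds
    best = np.inf
    for _ in range(budget):
        x = np.random.uniform(lb, ub)
        best = min(best, problem(x))   # each call counts as one evaluation
    return best

# restrict the suite for a quick run; option keys follow the COCO documentation
suite = cocoex.Suite("bbob", "", "dimensions: 2,5 function_indices: 1-5")
for problem in suite:
    best = random_search(problem, budget=100 * problem.dimension)
    print(problem.id, "best f:", best)
```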