Towards a White Box Approach to Automated Algorithm Design

Authors: Steven Adriaensen, Ann Nowé

IJCAI 2016 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we illustrate some of the benefits of white box evaluation. To this purpose, we compare the performance of the implementation described in Section 5 to that of a similar black box implementation on two micro-benchmarks. These implementations differ in that the white box optimizer (WB) maintains transition data (n, r) and returns , while the black box optimizer (BB) maintains a c → f(c) mapping and returns c* = arg max_{c′} f(c′). Figures 2 and 4 show the performance of the algorithm returned by each optimizer, after x algorithm evaluations, averaged over 100 independent meta-optimization runs.
Researcher Affiliation | Academia | Steven Adriaensen, Ann Nowé — Vrije Universiteit Brussel, Pleinlaan 2, 1050 Elsene, Belgium; {steven.adriaensen, ann.nowe}@vub.ac.be
Pseudocode | Yes | Figure 1: Code for Benchmark 1
Open Source Code | Yes | We have implemented our optimizer as a standalone Java library. https://github.com/Steven-Adriaensen/White-box-ADP
Open Datasets | No | The paper introduces two 'micro-benchmarks' defined by code snippets within the paper (Figure 1 and Figure 3) rather than using external, publicly available datasets with specific access information or standard citations.

Dataset Splits | No | The paper does not provide specific dataset split information (exact percentages, sample counts, or citations to predefined splits) for training, validation, or testing. It refers to 'a given input (with variable seed)' for evaluations, but not to formal splits.

Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for running its experiments.

Software Dependencies | No | The paper mentions that the optimizer is implemented as a 'standalone Java library' but does not specify a version number for Java or any other software dependencies with their respective versions.

Experiment Setup | No | The paper describes the general approach of the solver and the agents used (URS, PURS, GR) but does not provide specific experimental setup details such as hyperparameters (e.g., learning rates, batch sizes, specific numerical settings for the optimizer) typically found in reproducible experimental setups.
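The black box baseline quoted in the Research Type row maintains a c → f(c) mapping over configurations and returns c* = arg max_{c′} f(c′). A minimal Java sketch of that bookkeeping is given below; it is an illustration only, not the paper's actual library code — the class name and the incremental-mean estimate of f(c) are our assumptions.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch (not the paper's code): a black box optimizer's
// bookkeeping. It maintains a c -> f(c) mapping, where f(c) is the running
// mean of the observed performance of configuration c, and returns the
// configuration with the highest estimate.
class BlackBoxOptimizer {
    private final Map<String, Double> meanPerf = new HashMap<>(); // f(c)
    private final Map<String, Integer> count = new HashMap<>();   // n(c)

    // Record one evaluation result r for configuration c.
    void observe(String c, double r) {
        int n = count.getOrDefault(c, 0);
        double m = meanPerf.getOrDefault(c, 0.0);
        count.put(c, n + 1);
        meanPerf.put(c, m + (r - m) / (n + 1)); // incremental mean update
    }

    // Return c* = arg max over c' of f(c').
    String best() {
        String argmax = null;
        double bestVal = Double.NEGATIVE_INFINITY;
        for (Map.Entry<String, Double> e : meanPerf.entrySet()) {
            if (e.getValue() > bestVal) {
                bestVal = e.getValue();
                argmax = e.getKey();
            }
        }
        return argmax;
    }
}
```

The white box optimizer differs in that it keeps per-transition data (n, r) rather than only per-configuration aggregates; reproducing that would require the decision-process structure from the paper's Section 5, so it is not sketched here.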