DACBench: A Benchmark Library for Dynamic Algorithm Configuration
Authors: Theresa Eimer, André Biedenkapp, Maximilian Reimer, Steven Adriaensen, Frank Hutter, Marius Lindauer
IJCAI 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To show the potential, broad applicability and challenges of DAC, we explore how a set of six initial benchmarks compares in several dimensions of difficulty. In order to study our benchmarks, we discuss dimensions of difficulty which are relevant to the DAC setting. To provide insights into how our benchmarks behave in these dimensions, we use static policies, known dynamic baselines and random dynamic policies to explore their unique challenges. |
| Researcher Affiliation | Collaboration | Theresa Eimer¹, André Biedenkapp², Maximilian Reimer¹, Steven Adriaensen², Frank Hutter²,³ and Marius Lindauer¹; ¹Information Processing Institute (tnt), Leibniz University Hannover, Germany; ²Department of Computer Science, University of Freiburg, Germany; ³Bosch Center for Artificial Intelligence, Renningen, Germany |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The project repository can be found at https://github.com/automl/DACBench (a minimal usage sketch of the library's Gym-style interface follows the table). |
| Open Datasets | Yes | Furthermore, DACBench is designed to build upon existing benchmark libraries in target domains by integrating their algorithm implementations. This includes well-established benchmarks like COCO [Hansen et al., 2020] or IOHProfiler [Doerr et al., 2018]. CMA-ES [Hansen et al., 2003] is an evolutionary strategy, where the DAC task is to adapt the algorithm's step size [Shala et al., 2020] when solving BBOB functions. |
| Dataset Splits | No | The paper states that "To assess generalization performance, a training and test set of instances is required." but does not provide specific percentages, sample counts, or explicit details about the train/validation/test splits used for their experiments. |
| Hardware Specification | Yes | All experiments in this paper were conducted on a single machine with an AMD Ryzen 7 3700X 8-Core Processor and 32GB of RAM. |
| Software Dependencies | No | The paper mentions software components implicitly (e.g., DACBench, the OpenAI Gym API, algorithms like CMA-ES) but does not provide specific version numbers for any libraries, frameworks, or programming languages used. |
| Experiment Setup | Yes | All of them were run for 10 seeds with at most 1,000 steps on each instance. For benchmarks with a discrete action space, static policies cover all the actions. The two benchmarks with continuous action spaces, CMA-ES and SGD-DL, were run with 50 static actions each, distributed uniformly over the action space (a sketch of this protocol follows below the table). |
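
Since the paper exposes all benchmarks through the OpenAI Gym API, a minimal interaction sketch is shown below. The benchmark class and `get_environment()` call follow the public repository (https://github.com/automl/DACBench) at publication time; treat the exact names and the pre-0.26 Gym signatures as assumptions if the API has since changed.

```python
# Minimal sketch: querying a DACBench benchmark through its Gym-style
# interface. SigmoidBenchmark and get_environment() mirror the public
# repository at publication time; the pre-0.26 Gym reset/step signatures
# are assumed here.
from dacbench.benchmarks import SigmoidBenchmark

bench = SigmoidBenchmark()       # one of the six initial benchmarks
env = bench.get_environment()    # Gym-style environment over an instance set

state = env.reset()              # starts an episode on the next instance
done = False
while not done:
    action = env.action_space.sample()            # a random dynamic policy
    state, reward, done, info = env.step(action)  # 4-tuple, pre-0.26 Gym API
env.close()
```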
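
The static-policy protocol from the Experiment Setup row can be expressed compactly. The sketch below is illustrative, not the authors' code: `make_env` is a hypothetical factory returning a DACBench-style environment, and the old Gym `seed()` and 4-tuple `step()` conventions are assumed.

```python
# Hedged sketch of the static-policy baseline: for a continuous action space,
# 50 static actions spread uniformly over its range, each run for 10 seeds
# with at most 1,000 steps per instance. make_env is a hypothetical helper.
import numpy as np

def evaluate_static_policies(make_env, n_actions=50, n_seeds=10, max_steps=1000):
    env = make_env()
    # Uniform grid over the (possibly multi-dimensional) continuous action space.
    static_actions = np.linspace(env.action_space.low, env.action_space.high,
                                 n_actions)

    returns = np.zeros((n_actions, n_seeds))
    for i, action in enumerate(static_actions):
        for seed in range(n_seeds):
            env.seed(seed)                     # old Gym seeding convention
            env.reset()
            total, done, steps = 0.0, False, 0
            while not done and steps < max_steps:
                # A static policy plays the same action at every step.
                _, reward, done, _ = env.step(action)
                total += reward
                steps += 1
            returns[i, seed] = total
    return returns
```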