Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
A Survey of Methods for Automated Algorithm Configuration
Authors: Elias Schede, Jasmin Brandt, Alexander Tornede, Marcel Wever, Viktor Bengs, Eyke Hüllermeier, Kevin Tierney
JAIR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | This paper is a survey of methods for Automated Algorithm Configuration. The abstract states: "we introduce taxonomies to describe the AC problem and features of configuration methods, respectively. We review existing AC literature within the lens of our taxonomies, outline relevant design choices of configuration approaches, contrast methods and problem variants against each other, and describe the state of AC in industry. Finally, our review provides researchers and practitioners with a look at future research directions in the field of AC." The paper synthesizes and analyzes existing research rather than conducting new empirical studies. |
| Researcher Affiliation | Academia | All listed authors are affiliated with universities: Bielefeld University, Paderborn University, and LMU Munich. Email domains like @uni-bielefeld.de, @upb.de, and @ifi.lmu.de further confirm academic affiliations. |
| Pseudocode | Yes | The paper contains a clearly labeled algorithm block: "Algorithm 1 SMBO(configuration space Θ, initial design, black box function g, surrogate model bg, acquisition function a)" on page 11. |
| Open Source Code | Yes | The paper provides a list of software resources in Table 7, including specific GitHub links and other URLs for various AC systems and benchmarks, such as "D-SMAC https://github.com/tqichun/distributed-SMAC3" and "irace https://github.com/MLopez-Ibanez/irace". |
| Open Datasets | Yes | The paper lists "AClib https://bitbucket.org/mlindauer/aclib2/src/master/" under "Benchmarks" in Table 7. AClib is a widely used benchmark library for algorithm configuration, so the listed resource points to openly available problem instances. |
| Dataset Splits | No | The paper is a survey and does not conduct its own experiments; therefore, it does not define or provide specific training/test/validation dataset splits. It discusses training instances only in general terms when describing surveyed methods and defines no splits of its own. |
| Hardware Specification | No | As a survey paper, the authors do not describe any specific hardware (like GPU/CPU models, memory, or cloud instances) used for running their own experiments. The paper focuses on reviewing methodologies rather than presenting new experimental results that require hardware specification. |
| Software Dependencies | No | The paper lists various software tools in Table 7 as examples of available resources in the field of AC. However, it does not specify any ancillary software dependencies with version numbers (e.g., Python, PyTorch) that were used by the authors to produce the survey or that are required to replicate any experiments conducted by them. |
| Experiment Setup | No | As a survey paper, the authors do not provide details about their own experimental setup, hyperparameters, model initialization, or training schedules. The paper focuses on reviewing and classifying existing methods, not on presenting new experimental results. |
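The Pseudocode row above cites the paper's Algorithm 1, SMBO (sequential model-based optimization): evaluate an initial design, then repeatedly fit a surrogate model to the observed costs, optimize an acquisition function over the configuration space, and evaluate the winning configuration with the true black-box function. A minimal illustrative sketch of that loop is below; the 1-nearest-neighbour surrogate, the distance-based exploration bonus, and the toy 1-D configuration space are all stand-ins chosen for brevity, not the paper's actual design choices.

```python
def smbo(space, g, init_design, n_iter=20, kappa=1.0):
    """Sketch of an SMBO loop: evaluate an initial design, then
    repeatedly fit a surrogate, minimize an acquisition function over
    the configuration space, and evaluate the chosen configuration."""
    history = [(x, g(x)) for x in init_design]  # evaluated (config, cost) pairs

    def surrogate(x):
        # 1-nearest-neighbour surrogate: predict the cost of the closest
        # evaluated configuration (a stand-in for the random-forest or
        # Gaussian-process surrogates used in practice)
        return min(history, key=lambda p: abs(p[0] - x))[1]

    def acquisition(x):
        # Lower-confidence-bound-style score: predicted cost minus an
        # exploration bonus for distance to already-evaluated configs
        dist = min(abs(p[0] - x) for p in history)
        return surrogate(x) - kappa * dist

    for _ in range(n_iter):
        x_next = min(space, key=acquisition)   # optimize the acquisition
        history.append((x_next, g(x_next)))    # evaluate the black box
    return min(history, key=lambda p: p[1])    # incumbent configuration


# Toy run: a 1-D "configuration space" and a hypothetical cost function
space = [i / 10 for i in range(101)]           # grid over [0, 10]
cost = lambda x: (x - 3.0) ** 2                # optimum at x = 3
best_x, best_y = smbo(space, cost, [0.0, 2.5, 5.0, 7.5, 10.0])
```

The loop steadily trades off exploitation (low surrogate prediction) against exploration (distance from evaluated points), so the incumbent converges toward the low-cost region of the space.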