Machine Learning for Online Algorithm Selection under Censored Feedback
Authors: Alexander Tornede, Viktor Bengs, Eyke Hüllermeier
AAAI 2022, pp. 10370-10380 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In an extensive experimental evaluation on an adapted version of the ASlib benchmark, we demonstrate that theoretically well-founded methods based on Thompson sampling perform specifically strong and improve in comparison to existing methods. |
| Researcher Affiliation | Academia | 1Department of Computer Science, Paderborn University 2Institute for Informatics, LMU Munich |
| Pseudocode | Yes | Alg. 1 provides the pseudo code for this revisited Thompson algorithm and a variant inspired by the Buckley-James estimate we discuss in the following. (A generic sketch of such a selector appears below the table.) |
| Open Source Code | Yes | All code including detailed documentation and the appendix itself can be found on GitHub. https://github.com/alexandertornede/online_as |
| Open Datasets | Yes | We base our evaluation on the standard algorithm selection benchmark library ASlib (v4.0) (Bischl et al. 2016) |
| Dataset Splits | No | Since ASlib was originally designed for offline AS, we do not use the train/test splits provided by the benchmark, but rather pass each instance one by one to the corresponding online approaches, ask them to select an algorithm and return the corresponding feedback. |
| Hardware Specification | Yes | All experiments were run on machines featuring Intel Xeon E5-2695v4@2.1GHz CPUs with 16 cores and 64GB RAM |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers. |
| Experiment Setup | No | The corresponding hyperparameter settings used for the experiments can be found in Section F of the appendix and in the repository, parameter sensitivity analyses in Section G. |
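For context on the "Pseudocode" row: the paper's approaches are bandit-style selectors that pick one algorithm per problem instance and learn from runtimes censored at a cutoff. The sketch below is a minimal illustration of that idea, assuming a Gaussian posterior over each algorithm's mean log-runtime and naive cutoff imputation for timeouts. It is not the paper's Alg. 1 (whose variants, including the Buckley-James-inspired one, handle censoring more carefully), and all class names and hyperparameters here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)


class ThompsonAlgorithmSelector:
    """Illustrative Thompson-sampling selector over a fixed algorithm set.

    Keeps an independent Gaussian posterior over each algorithm's mean
    log-runtime. Censored runs (timeouts) are imputed with the cutoff,
    a deliberate simplification compared to the paper's variants.
    """

    def __init__(self, n_algorithms, cutoff,
                 prior_mean=5.0, prior_var=4.0, noise_var=0.25):
        self.cutoff = cutoff        # censoring threshold, e.g. seconds
        self.noise_var = noise_var  # assumed noise on observed log-runtimes
        self.mean = np.full(n_algorithms, prior_mean, dtype=float)
        self.var = np.full(n_algorithms, prior_var, dtype=float)

    def select(self):
        # Thompson sampling: draw one mean log-runtime per algorithm
        # from its posterior and pick the smallest draw.
        samples = rng.normal(self.mean, np.sqrt(self.var))
        return int(np.argmin(samples))

    def update(self, algo, runtime):
        # Censored feedback: only min(runtime, cutoff) is ever observed.
        observed = np.log(min(runtime, self.cutoff))
        # Conjugate Gaussian update for the chosen algorithm only.
        prior_prec = 1.0 / self.var[algo]
        post_prec = prior_prec + 1.0 / self.noise_var
        self.mean[algo] = (prior_prec * self.mean[algo]
                           + observed / self.noise_var) / post_prec
        self.var[algo] = 1.0 / post_prec


# Tiny synthetic demo: three algorithms with different true mean runtimes;
# the slowest one often exceeds the cutoff and arrives censored.
true_means = np.array([50.0, 120.0, 400.0])
selector = ThompsonAlgorithmSelector(n_algorithms=3, cutoff=300.0)
for _ in range(200):
    a = selector.select()
    runtime = rng.lognormal(mean=np.log(true_means[a]), sigma=0.3)
    selector.update(a, runtime)
print("posterior mean log-runtimes:", np.round(selector.mean, 2))
```

Note that imputing the cutoff makes a censored run look exactly as slow as the timeout, which systematically underestimates the true runtime of frequently-censored algorithms; the paper's Buckley-James-inspired variant instead replaces censored values with conditional expectations given that the run exceeded the cutoff.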