Meta-Learning Hypothesis Spaces for Sequential Decision-making

Authors: Parnian Kassraie, Jonas Rothfuss, Andreas Krause

ICML 2022

Each entry below lists the reproducibility variable, the assessed result, and the supporting LLM response:
Research Type: Experimental
    "We also empirically evaluate the effectiveness of our approach on a Bayesian optimization task." "In this section, we provide experiments to quantitatively illustrate our theoretical contribution."
Researcher Affiliation: Academia
    "ETH Zurich, Switzerland. Correspondence to: Parnian Kassraie <pkassraie@ethz.ch>."
Pseudocode: No
    The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code: No
    The paper mentions data from the OpenML platform and the use of the CELER solver, but does not provide a link to open-source code for the META-KEL methodology it describes.
Open Datasets: Yes
    "The OpenML platform (Bischl et al., 2017) enables access to data from hyper-parameter tuning of GLMNET on 38 different classification tasks. The hyper-parameter evaluations are available under a Creative Commons BY 4.0 license and can be downloaded here" (link in footnote 4 of the paper).
Dataset Splits: Yes
    "We randomly split the available tasks (i.e. train/test evaluations on a specific dataset) into a set of meta-train and meta-test tasks." "We split these datasets into a meta-dataset with m = 25 and leave the rest as test tasks."
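The split described in the quote can be sketched as follows. This is a minimal illustration: the task count (38) and meta-train size (m = 25) come from the quoted setup, while the function name, task labels, and seed are assumptions of ours.

```python
import random

def split_meta_tasks(tasks, m=25, seed=0):
    """Randomly split tasks into m meta-train tasks; the rest become meta-test."""
    rng = random.Random(seed)
    shuffled = tasks[:]   # copy so the caller's list is left untouched
    rng.shuffle(shuffled)
    return shuffled[:m], shuffled[m:]

# 38 classification tasks, matching the OpenML GLMNET experiment
tasks = [f"task_{i}" for i in range(38)]
meta_train, meta_test = split_meta_tasks(tasks)
print(len(meta_train), len(meta_test))  # 25 13
```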
Hardware Specification: No
    The paper does not specify the hardware (e.g., GPU/CPU models, memory) used to run the experiments.
Software Dependencies: No
    The paper mentions using "CELER, a fast solver for the group Lasso (Massias et al., 2018)" but does not give a version number for it or for any other software dependency.
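For readers unfamiliar with the operation CELER accelerates: the core of group-Lasso solvers is the block soft-thresholding proximal step, which shrinks a whole coordinate group toward zero at once. The sketch below is a pure-Python illustration of that step (function and parameter names are ours; it is not CELER's API).

```python
import math

def block_soft_threshold(v, lam):
    """Proximal operator of lam * ||v||_2: shrink the entire group v toward zero."""
    norm = math.sqrt(sum(x * x for x in v))
    if norm <= lam:
        return [0.0] * len(v)  # small groups are zeroed out entirely
    scale = 1.0 - lam / norm
    return [scale * x for x in v]

print(block_soft_threshold([3.0, 4.0], 1.0))   # norm 5, shrunk by factor 0.8
print(block_soft_threshold([0.1, 0.1], 10.0))  # [0.0, 0.0]: whole group zeroed
```

This group-level shrinkage is what induces the row-sparse structure the paper exploits.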
Experiment Setup: Yes
    "We set p = 20 and s = |J_k| = 5." "We add Gaussian noise with standard deviation σ = 0.01 to all data points." "For all experiments we set n = m = 50 unless stated otherwise." "We set λ = 0.03, such that it satisfies the condition of Theorem 4.3."
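A data-generating setup consistent with these quoted parameters can be sketched as below. Only p = 20, s = 5, n = 50, and σ = 0.01 come from the paper; the standard-normal features, the linear model, and the choice of support are illustrative assumptions.

```python
import random

P, S, N, SIGMA = 20, 5, 50, 0.01  # dimensions and noise level from the quoted setup
rng = random.Random(0)

# Sparse ground truth: only s = 5 of the p = 20 coordinates are active
# (the support J_k and the unit weights are our assumptions, for illustration).
support = set(rng.sample(range(P), S))
w = [1.0 if j in support else 0.0 for j in range(P)]

# n = 50 data points with Gaussian noise of standard deviation sigma = 0.01
X = [[rng.gauss(0.0, 1.0) for _ in range(P)] for _ in range(N)]
y = [sum(wj * xj for wj, xj in zip(w, row)) + rng.gauss(0.0, SIGMA) for row in X]
```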