Deep Ranking Ensembles for Hyperparameter Optimization

Authors: Abdus Salam Khazi, Sebastian Pineda Arango, Josif Grabocka

ICLR 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In a large-scale experimental protocol comprising 12 baselines, 16 HPO search spaces and 86 datasets/tasks, we demonstrate that our method achieves new state-of-the-art results in HPO.
Researcher Affiliation | Academia | Abdus Salam Khazi, Sebastian Pineda Arango, Josif Grabocka; University of Freiburg
Pseudocode | Yes | Algorithm 1: Meta-learning the Deep Ranking Ensembles
Open Source Code | Yes | Our code is available in the following repository: https://github.com/releaunifreiburg/DeepRankingEnsembles
Open Datasets | Yes | We base our experiments on HPO-B (Pineda Arango et al., 2021), the largest public benchmark for HPO.
Dataset Splits | Yes | It contains 16 search spaces, each of which comprises a meta-train, meta-test, and meta-validation split.
Hardware Specification | No | The paper does not provide specific hardware details such as exact GPU/CPU models, processor types, or memory amounts used for running its experiments. It only mentions computational cost in terms of time.
Software Dependencies | No | The paper mentions using the Adam optimizer and neural networks, and refers to the Deep Set architecture from Jomaa et al. (2021a), but it does not provide specific version numbers for any key software components or libraries (e.g., Python, PyTorch/TensorFlow, specific libraries with versions).
Experiment Setup | Yes | The ensemble of scorers is composed of 10 MLPs with identical architectures: four layers and 32 neurons... We meta-learn DRE for 5000 epochs with Adam optimizer, learning rate 0.001 and batch size 100. ... During meta-test in every BO iteration, we update the pre-trained weights for 1000 epochs. For DRE-RI, we initialize randomly the scorers and train them for 1000 epochs using Adam Optimizer with a learning rate of 0.02.
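
To make the reported experiment setup concrete, the following is a minimal sketch of the described ensemble configuration, not the authors' code. It assumes PyTorch (the paper does not name the framework), interprets "four layers and 32 neurons" as four hidden layers of 32 units each, and uses hypothetical names and placeholders (make_scorer, ensemble, input_dim = 16).

import torch
import torch.nn as nn

ENSEMBLE_SIZE = 10   # 10 scorers with identical architectures
HIDDEN_UNITS = 32    # 32 neurons per layer (assumed per hidden layer)
NUM_LAYERS = 4       # four layers

def make_scorer(input_dim: int) -> nn.Sequential:
    # One MLP scorer: four hidden layers of 32 units, scalar ranking score.
    layers, dim = [], input_dim
    for _ in range(NUM_LAYERS):
        layers += [nn.Linear(dim, HIDDEN_UNITS), nn.ReLU()]
        dim = HIDDEN_UNITS
    layers.append(nn.Linear(dim, 1))
    return nn.Sequential(*layers)

input_dim = 16  # placeholder; in practice the dimensionality of the HPO search space
ensemble = nn.ModuleList([make_scorer(input_dim) for _ in range(ENSEMBLE_SIZE)])

# Meta-training settings reported in the paper: Adam, learning rate 0.001,
# batch size 100, 5000 epochs over the meta-train tasks.
meta_optimizer = torch.optim.Adam(ensemble.parameters(), lr=1e-3)
META_EPOCHS, BATCH_SIZE = 5000, 100

# Meta-test: fine-tune the pre-trained weights for 1000 epochs per BO iteration.
# DRE-RI variant: re-initialize the scorers and train 1000 epochs with lr 0.02.
FINETUNE_EPOCHS = 1000
dre_ri_optimizer = torch.optim.Adam(ensemble.parameters(), lr=0.02)

The ranking loss and the BO acquisition step are omitted here; the sketch only fixes the architecture and optimizer settings quoted above.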