Targeted Hyperparameter Optimization with Lexicographic Preferences Over Multiple Objectives

Authors: Shaokun Zhang, Feiran Jia, Chi Wang, Qingyun Wu

ICLR 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We perform extensive empirical evaluation on four different machine learning model tuning tasks."
Researcher Affiliation | Collaboration | Shaokun Zhang (1), Feiran Jia (1), Chi Wang (2), Qingyun Wu (1); (1) Pennsylvania State University, State College, PA, USA; (2) Microsoft Research, Redmond, Washington, USA
Pseudocode | Yes | "Algorithm 1: LexiFlow" (a concept sketch of lexicographic comparison follows the table)
Open Source Code | Yes | "The implementation of our method is available in the open-source AutoML library FLAML." Link to the documentation page of LexiFlow in FLAML: https://microsoft.github.io/FLAML/docs/Use-Cases/Tune-User-Defined-Function#lexicographic-objectives. A code example demonstrating the use of LexiFlow to find accurate and fast neural networks: https://microsoft.github.io/FLAML/docs/Examples/Tune-Lexicographic-objectives (see also the usage sketch after the table).
Open Datasets | Yes | "All datasets used in our experiment are available in OpenML." (A loading sketch follows the table.)
Dataset Splits | Yes | "Table 6: Data statistics information" (columns: # of train instances, # of val instances)
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory amounts) used for running experiments were provided.
Software Dependencies | No | The paper mentions software libraries like FLAML, Optuna, and Scikit-learn, but does not provide specific version numbers for these or other ancillary software components used in the experiments.
Experiment Setup | Yes | "The detailed search spaces in tuning Neural Networks, Random Forest, and XGBoost are shown in Table 3, Table 4, and Table 5, respectively." (A search-space sketch follows the table.)
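
Since the Pseudocode row above quotes only the caption of Algorithm 1, the following is a minimal, generic sketch of pairwise lexicographic comparison with per-objective tolerances, the idea the row refers to. It is an illustration of the concept only, not the authors' Algorithm 1: in LexiFlow the tolerance region is defined relative to the best value found so far, which this pairwise simplification omits.

```python
# Generic sketch: pairwise lexicographic comparison with tolerances.
# All objectives are assumed to be minimized. Illustrative only.
def lexico_better(a, b, metrics, tolerances):
    """Return True if result `a` is preferred to result `b` under a
    lexicographic ordering of `metrics` (highest priority first)."""
    for m in metrics:
        tol = tolerances.get(m, 0.0)
        if abs(a[m] - b[m]) > tol:
            # This objective is decisive; lower is better.
            return a[m] < b[m]
        # Within tolerance: treat as tied, defer to the next objective.
    return False  # tied on every objective

# Usage: error rate dominates; latency breaks near-ties.
r1 = {"error_rate": 0.081, "latency": 0.9}
r2 = {"error_rate": 0.080, "latency": 2.5}
print(lexico_better(r1, r2, ["error_rate", "latency"],
                    {"error_rate": 0.02}))  # True: errors tie, r1 is faster
```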
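The Open Source Code row points to the FLAML documentation on lexicographic objectives. Below is a minimal usage sketch following that documentation; the metric names, search space, and toy evaluation function are illustrative assumptions, not the paper's exact experimental setup.

```python
# Sketch of tuning with lexicographic objectives in FLAML.
from flaml import tune

def evaluate_config(config):
    # Hypothetical stand-in for training a model and measuring both
    # objectives; replace with real training/validation code.
    error_rate = (config["lr"] - 0.01) ** 2
    flops = config["num_layers"] * 1e6
    return {"error_rate": error_rate, "flops": flops}

lexico_objectives = {
    "metrics": ["error_rate", "flops"],  # priority order: error_rate first
    "modes": ["min", "min"],             # minimize both objectives
    "tolerances": {"error_rate": 0.02, "flops": 0.0},
    "targets": {"error_rate": 0.0, "flops": 0.0},
}

analysis = tune.run(
    evaluate_config,
    config={
        "lr": tune.loguniform(1e-4, 1e-1),
        "num_layers": tune.randint(1, 10),
    },
    lexico_objectives=lexico_objectives,
    num_samples=-1,      # keep sampling until the time budget is spent
    time_budget_s=60,
    use_ray=False,
)
print(analysis.best_config)
```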
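The Open Datasets row states that all datasets are available on OpenML. A minimal sketch of retrieving one via the `openml` Python package is shown below; the dataset ID is a placeholder assumption, since the paper's IDs are not quoted in this report.

```python
# Sketch: fetch a dataset from OpenML by ID (placeholder ID below).
import openml

dataset = openml.datasets.get_dataset(1169)  # placeholder OpenML dataset ID
X, y, categorical_indicator, attribute_names = dataset.get_data(
    target=dataset.default_target_attribute
)
print(dataset.name, X.shape)
```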
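The Experiment Setup row cites search-space tables (Tables 3-5) that are not reproduced in this report. The sketch below shows how such a search space can be declared with `flaml.tune`; the hyperparameter names and ranges are illustrative assumptions, not the paper's exact Table 5.

```python
# Sketch of an XGBoost-style search space declared with flaml.tune.
from flaml import tune

xgboost_search_space = {
    "n_estimators": tune.lograndint(4, 1024),     # log-scale integer range
    "max_depth": tune.randint(1, 10),
    "learning_rate": tune.loguniform(1e-3, 1.0),
    "subsample": tune.uniform(0.5, 1.0),
    "colsample_bytree": tune.uniform(0.5, 1.0),
}
```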