Multi-Fidelity Automatic Hyper-Parameter Tuning via Transfer Series Expansion

Authors: Yi-Qi Hu, Yang Yu, Wei-Wei Tu, Qiang Yang, Yuqiang Chen, Wenyuan Dai

AAAI 2019, pp. 3846-3853

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on real-world AutoML problems verify that the proposed framework can accelerate derivative-free configuration search significantly by making use of the multi-fidelity evaluations.
Researcher Affiliation | Collaboration | 1) National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China; 2) 4Paradigm Inc., Beijing, China; 3) Hong Kong University of Science and Technology, Hong Kong, China
Pseudocode | Yes | Algorithm 1 (Multi-Fidelity Optimization Framework) and Algorithm 2 (Multi-Fidelity Optimization with TSE); a hedged sketch of the outer loop appears after this table.
Open Source Code | No | The paper mentions 'some open-source tools' related to AutoML, but does not provide any statement or link for the source code of the methodology described in this paper.
Open Datasets | Yes | The details of the datasets are shown in Table 1. Some of them, such as Musk, HTRU2, Magic04, Adult, Sensorless, Connect, and Higgs, are benchmark datasets from UCI; the rest, such as Credit, Miniboone, Airline, MovieLens, and Criteo, come from machine learning competitions.
Dataset Splits | Yes | The validation datasets are constructed by sampling 10% of the instances from D_train; a hold-out split sketch appears after this table.
Hardware Specification | No | No specific hardware details (e.g., CPU/GPU models, memory) used for running the experiments were provided.
Software Dependencies | No | The paper mentions 'LightGBM (Ke et al. 2017)', 'SRACOS (Yu, Qian, and Hu 2016; Hu, Qian, and Yu 2017)', and a 'random forest regressor', but gives no specific version numbers for any software dependency.
Experiment Setup | Yes | For multi-fidelity optimization methods, one high-fidelity evaluation is obtained for every 100 low-fidelity evaluations, i.e., T_L = 100. The total high-fidelity evaluation budget is 50 (T_H = 50). Thus, one multi-fidelity optimization process comprises 5,000 low-fidelity evaluations and 50 high-fidelity evaluations in total.
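
The 10% hold-out in the Dataset Splits row is a standard validation split. Below is a minimal sketch assuming scikit-learn and synthetic stand-in arrays; the array names, the random seed, and the use of stratification are our assumptions, since the paper specifies none of them.

    import numpy as np
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for the paper's training set D_train; shapes are illustrative only.
    X_train = np.random.rand(1000, 20)
    y_train = np.random.randint(0, 2, size=1000)

    # Sample 10% of the instances from D_train as the validation set.
    # Seed and stratification are our assumptions; the paper reports neither.
    X_fit, X_val, y_fit, y_val = train_test_split(
        X_train, y_train, test_size=0.10, random_state=0, stratify=y_train)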
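
To make the evaluation schedule in the Experiment Setup row concrete, the following is a minimal sketch of a budgeted multi-fidelity loop in the spirit of the paper's Algorithm 1, not the authors' implementation. The helpers propose_configuration, evaluate_low_fidelity, and evaluate_high_fidelity are hypothetical placeholders, the promotion rule is a plain argmin, and the TSE surrogate (which in the paper corrects low-fidelity scores with a regressor fitted on the accumulated high-fidelity observations) is omitted. With T_L = 100 and T_H = 50 the loop performs exactly the 5,000 low-fidelity and 50 high-fidelity evaluations reported above.

    import random

    T_L, T_H = 100, 50  # budgets reported in the paper

    def propose_configuration():
        # Placeholder sampler over a toy hyper-parameter space.
        return {"learning_rate": 10 ** random.uniform(-3, 0),
                "num_leaves": random.randint(8, 256)}

    def evaluate_low_fidelity(cfg):
        # Placeholder for a cheap evaluation, e.g. training on a small sub-sample.
        return random.random()

    def evaluate_high_fidelity(cfg):
        # Placeholder for an expensive evaluation on the full training data.
        return random.random()

    best_cfg, best_score = None, float("inf")
    high_fidelity_history = []

    for _ in range(T_H):
        # Spend T_L cheap evaluations to pre-screen candidate configurations.
        candidates = [(evaluate_low_fidelity(c), c)
                      for c in (propose_configuration() for _ in range(T_L))]
        # Promote the best-looking candidate to one expensive high-fidelity evaluation.
        _, promoted = min(candidates, key=lambda t: t[0])
        score = evaluate_high_fidelity(promoted)
        high_fidelity_history.append((promoted, score))
        if score < best_score:
            best_score, best_cfg = score, promoted

    print("best configuration:", best_cfg, "validation loss:", best_score)

In the paper, the promotion step would instead rank candidates by TSE-corrected low-fidelity scores, with the correction model refit each time a new high-fidelity observation is collected.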