Multi-Fidelity Black-Box Optimization with Hierarchical Partitions
Authors: Rajat Sen, Kirthevasan Kandasamy, Sanjay Shakkottai
ICML 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We develop tree-search based multi-fidelity algorithms with theoretical guarantees on simple regret. We finally demonstrate the performance gains of our algorithms on both real and synthetic datasets. |
| Researcher Affiliation | Academia | 1University of Texas at Austin, 2Carnegie Mellon University. |
| Pseudocode | Yes | Algorithm 1 MFDOO: Multi-Fidelity Deterministic Optimistic Optimization. Algorithm 2 MFPDOO: Multi-Fidelity Parallel Deterministic Optimistic Optimization. (A hedged sketch of the MFDOO loop appears below the table.) |
| Open Source Code | Yes | Our implementation can be found at https://github.com/rajatsen91/MFTREE_DET. |
| Open Datasets | Yes | For this purpose we use a subset of the 20 news group dataset (Joachims, 1996). |
| Dataset Splits | Yes | The results are averaged over 10 experiments and the corresponding error bars are shown. We use a one-dimensional fidelity space, where the fidelity denotes the number of samples used to obtain 5-fold cross-validation accuracy. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU, GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions using 'the scikit-learn implementation of SVM classifier' but does not specify any version numbers for scikit-learn or other software dependencies, which would be necessary for reproducibility. |
| Experiment Setup | Yes | The bias function is assumed to be known. However, in practice we assume a parametric form for the bias function, ζ(z) = c(1 − z), where c is initially set to a very small constant like 0.001 in our experiments. We set K = 2 in all our experiments. In our algorithm we set the number of MFDOO instances spawned to be N = 0.1·Dmax·log(Λ/λ(1)), given a total budget Λ. We set ρmax = 0.95 and νmax = 2.0. For our algorithms we set νmax = 1.0 and ρmax = 0.9. (A hedged sketch of the cross-validated SVM oracle used as the objective appears below the table.) |
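
The Pseudocode row above quotes the names of Algorithms 1 and 2 without reproducing them. Below is a minimal Python sketch of the MFDOO selection loop, pieced together from the quoted experiment setup: a K = 2 binary partition of [0, 1], the parametric bias ζ(z) = c(1 − z), and smoothness parameters ν and ρ. The function name `mfdoo`, the linear query-cost `cost`, the fidelity clamping, and all default values are illustrative assumptions, not the authors' implementation.

```python
import heapq

def mfdoo(f, c=0.001, nu=1.0, rho=0.9, budget=50.0,
          cost=lambda z: 0.1 + 0.9 * z):
    """Hedged sketch of a multi-fidelity DOO loop on [0, 1], K = 2.

    f(x, z): multi-fidelity oracle, fidelity z in [0, 1] (z = 1 is exact).
    Bias model: zeta(z) = c * (1 - z), as in the quoted experiment setup.
    The cost function and defaults are made up for illustration.
    """
    def evaluate(h, lo, hi):
        # Query just accurately enough that the bias zeta(z) is at most
        # the cell's optimistic slack nu * rho**h; clamp z into [0, 1].
        # With a tiny c, shallow cells land at the cheapest fidelity.
        z = min(1.0, max(0.0, 1.0 - nu * rho ** h / c))
        x = 0.5 * (lo + hi)                  # evaluate the cell's center
        val = f(x, z)
        # b-value: observed value + bias allowance + smoothness allowance.
        b = val + c * (1.0 - z) + nu * rho ** h
        return cost(z), x, val, b

    spent, (q, best_x, best_val, b0) = 0.0, evaluate(0, 0.0, 1.0)
    spent += q
    heap = [(-b0, 0, 0.0, 1.0)]              # max-heap on b-values
    while heap and spent < budget:
        _, h, lo, hi = heapq.heappop(heap)   # expand most optimistic leaf
        mid = 0.5 * (lo + hi)
        for child in ((h + 1, lo, mid), (h + 1, mid, hi)):
            q, x, v, b = evaluate(*child)
            spent += q
            if v > best_val:
                best_val, best_x = v, x
            heapq.heappush(heap, (-b, *child))
    return best_x, best_val
```
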
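An oracle matching this `f(x, z)` signature can be built from the objective the Dataset Splits and Software Dependencies rows describe: 5-fold cross-validation accuracy of a scikit-learn SVM, with the fidelity being the number of samples used. A hedged sketch follows; the hyperparameter mapping x → C = 10^x, the floor `n_min`, the seed, and the random subsampling are assumptions for illustration, not the paper's exact setup.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def make_cv_oracle(X, y, n_min=100, seed=0):
    """Return f(x, z): 5-fold CV accuracy of an SVM trained on a
    z-dependent subsample (details assumed; see the lead-in)."""
    rng = np.random.default_rng(seed)
    n_max = len(y)

    def f(x, z):
        # Fidelity z in [0, 1] maps linearly to the subsample size.
        n = int(n_min + z * (n_max - n_min))
        idx = rng.choice(n_max, size=n, replace=False)
        clf = SVC(C=10.0 ** x)               # x is log10 of the SVM's C
        return cross_val_score(clf, X[idx], y[idx], cv=5).mean()

    return f
```

Put together, `x_best, v_best = mfdoo(make_cv_oracle(X, y))` runs a cheap-to-accurate hyperparameter search over log10(C); this is a toy reconstruction for readers of the report, not the authors' MFTREE_DET code.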