Global optimization of Lipschitz functions
Authors: Cédric Malherbe, Nicolas Vayatis
ICML 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | a numerical assessment is provided at the end of the paper to illustrate the potential of this strategy with respect to state-of-the-art methods over typical benchmark problems for global optimization. In Section 5, experiments are conducted to compare the empirical performance of AdaLIPO with five state-of-the-art global optimization methods on various datasets and synthetic problems. |
| Researcher Affiliation | Academia | 1CMLA, ENS Cachan, CNRS, Université Paris-Saclay, 94235, Cachan, France. |
| Pseudocode | Yes | Algorithm 1 LIPO(n, k, X, f) and Algorithm 2 AdaLIPO(n, p, (k_i)_{i∈Z}, X, f) provide structured pseudocode for the proposed algorithms. |
| Open Source Code | No | The paper does not explicitly state that the source code for LIPO or AdaLIPO is open-sourced, nor does it provide a link to it. It only mentions the use of third-party libraries for the comparison algorithms: In Python 2.7 from BayesOpt (Martinez-Cantin, 2014), CMA 1.1.06 (Hansen, 2011) and NLOpt (Johnson, 2014). |
| Open Datasets | Yes | We first studied the task of estimating the regularization parameter λ and the bandwidth σ of a Gaussian kernel ridge regression minimizing the empirical mean squared error of the predictions over a 10-fold cross validation with real data sets. The optimization was performed over (ln(λ), ln(σ)) ∈ [−3, 5] × [−2, 2] with five data sets from the UCI Machine Learning Repository (Lichman, 2013): Auto-MPG, Breast Cancer Wisconsin (Prognostic), Concrete Slump Test, Housing and Yacht Hydrodynamics. We then compared the algorithms on a series of five synthetic problems commonly met in standard optimization benchmarks, taken from (Jamil & Yang, 2013; Surjanovic & Bingham, 2013): Holder Table, Rosenbrock, Sphere, Linear Slope and Deb N.1. |
| Dataset Splits | Yes | minimizing the empirical mean squared error of the predictions over a 10-fold cross validation with real data sets. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory amounts) used for running its experiments. |
| Software Dependencies | Yes | In Python 2.7 from BayesOpt (Martinez-Cantin, 2014), CMA 1.1.06 (Hansen, 2011) and NLOpt (Johnson, 2014). |
| Experiment Setup | Yes | For a fair comparison, the tuning parameters were all set to default and AdaLIPO was constantly used with a parameter p set to 0.1 and a sequence k_i = (1 + 0.01/d)^i fixed by an arbitrary rule of thumb. |
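Since no source code accompanies the paper, the pseudocode row above is the main reproducibility anchor. As context, the following is a minimal sketch of the LIPO decision rule that Algorithm 1 describes: given a Lipschitz constant k, a uniformly drawn candidate is evaluated only if its Lipschitz upper bound min_i (f(x_i) + k·‖x − x_i‖) can still exceed the best value seen so far. Function and parameter names here are illustrative, not taken from the paper.

```python
import numpy as np

def lipo(f, bounds, k, n, seed=None):
    """Sketch of LIPO (Malherbe & Vayatis, 2017) for maximizing f.

    bounds: list of (low, high) pairs, one per dimension.
    k: assumed Lipschitz constant of f.
    n: total budget of function evaluations.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    xs = [rng.uniform(lo, hi)]   # first point: drawn uniformly, always evaluated
    ys = [f(xs[0])]
    while len(ys) < n:
        x = rng.uniform(lo, hi)  # uniform candidate over the domain
        # Lipschitz upper bound on f(x) implied by past evaluations
        ub = min(y + k * np.linalg.norm(x - xi) for xi, y in zip(xs, ys))
        if ub >= max(ys):        # x is a potential maximizer: spend an evaluation
            xs.append(x)
            ys.append(f(x))
    best = int(np.argmax(ys))
    return xs[best], ys[best]
```

On a toy problem such as maximizing f(x) = −|x − 0.5| over [0, 1] with k = 1, the rule quickly concentrates evaluations near the true maximizer at 0.5; candidates whose upper bound cannot beat the current best are discarded without spending budget.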