Practical Bayesian Optimization for Model Fitting with Bayesian Adaptive Direct Search

Authors: Luigi Acerbi, Wei Ji Ma

NeurIPS 2017

Reproducibility assessment: each entry below gives a variable, its assessed result, and the supporting LLM response.
Research Type: Experimental. "We then perform an extensive benchmark of BADS vs. many common and state-of-the-art nonconvex, derivative-free optimizers, on a set of model-fitting problems with real data and models from six studies in behavioral, cognitive, and computational neuroscience. With default settings, BADS consistently finds comparable or better solutions than other methods, including vanilla BO, showing great promise for advanced BO techniques, and BADS in particular, as a general model-fitting tool."
Researcher Affiliation: Academia. Luigi Acerbi, Center for Neural Science, New York University (luigi.acerbi@nyu.edu); Wei Ji Ma, Center for Neural Science & Dept. of Psychology, New York University (weijima@nyu.edu).
Pseudocode: Yes. "Algorithm 1 Bayesian Adaptive Direct Search".
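
The paper's Algorithm 1 alternates a GP-surrogate-driven SEARCH stage with a MADS-style POLL stage on an adaptive mesh. The following is a minimal MATLAB sketch of that skeleton only, not the published implementation: the GP surrogate and acquisition function are stubbed out with a random perturbation, the poll basis is a fixed coordinate basis rather than the randomized basis BADS uses, and the mesh update is simplified.

    % Minimal sketch of the BADS poll/search skeleton, assuming a
    % row-vector starting point x0 in standardized space.
    function [x, fval] = bads_sketch(fun, x0, max_evals)
        tau = 2;                 % mesh expansion/contraction factor (paper: tau = 2)
        delta_poll = 1;          % poll size, Delta^poll_0 = 1
        delta_mesh = 2^(-10);    % mesh size, Delta^mesh_0 = 2^(-10)
        x = x0; fval = fun(x); evals = 1;
        D = numel(x);
        while evals < max_evals && delta_mesh > eps
            % SEARCH stage (stub): in BADS this proposes points via
            % Bayesian optimization on a local GP surrogate.
            x_s = x + delta_mesh * randn(1, D);
            f_s = fun(x_s); evals = evals + 1;
            if f_s < fval
                x = x_s; fval = f_s;   % successful search: skip the poll
                continue;
            end
            % POLL stage: evaluate points along a positive spanning set,
            % scaled by the poll size (opportunistic: stop at first success).
            success = false;
            B = [eye(D); -eye(D)];
            for i = 1:size(B, 1)
                x_p = x + delta_poll * B(i, :);
                f_p = fun(x_p); evals = evals + 1;
                if f_p < fval
                    x = x_p; fval = f_p; success = true; break;
                end
            end
            if success
                delta_poll = delta_poll * tau;   % expand on success
            else
                delta_poll = delta_poll / tau;   % contract on failure
                delta_mesh = delta_mesh / tau;
            end
        end
    end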
Open Source Code: Yes. "Code available at https://github.com/lacerbi/bads."
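
Since the released code is a MATLAB toolbox, a call would look roughly like the sketch below, following the calling convention documented in the repository (hard bounds LB/UB plus "plausible" bounds PLB/PUB that bracket where the optimum is expected). The toy quadratic objective is a stand-in for a real negative log likelihood; option settings may differ across versions.

    % Toy call, assuming the BADS toolbox is on the MATLAB path.
    nll = @(theta) sum((theta - [1 2 3]).^2);        % placeholder objective
    x0  = zeros(1, 3);                               % starting point
    lb  = -10 * ones(1, 3);  ub  = 10 * ones(1, 3);  % hard bounds
    plb = -5 * ones(1, 3);   pub = 5 * ones(1, 3);   % plausible bounds
    [x, fval] = bads(nll, x0, lb, ub, plb, pub);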
Open Datasets: Yes. "Second, we collected model-fitting problems from six published or ongoing studies in cognitive and computational neuroscience (CCN17). The objectives of the CCN17 set are negative log likelihood functions of an input parameter vector, for specified datasets and models, and can be deterministic or stochastic. For each study in the CCN17 set we asked its authors for six different real datasets (i.e., subjects or neurons), divided between one or two main models of interest; collecting a total of 36 test functions with D ∈ {6, 9, 10, 12, 13}."
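
To make the deterministic/stochastic distinction concrete, here is a toy MATLAB illustration. The Gaussian model, the synthetic dataset, and the added noise are placeholders, not any of the actual CCN17 models or data.

    % Deterministic vs. stochastic negative log likelihood objectives.
    data = 0.5 + randn(100, 1);     % synthetic "dataset" (placeholder)

    % Deterministic objective: exact Gaussian negative log likelihood,
    % with parameter vector theta = [mu, log(sigma)].
    nll_det = @(theta) sum(0.5 * log(2*pi*exp(2*theta(2))) + ...
        (data - theta(1)).^2 / (2 * exp(2*theta(2))));

    % Stochastic objective: a noisy estimate of the same quantity,
    % standing in for simulation-based likelihood estimates.
    nll_sto = @(theta) nll_det(theta) + 0.5 * randn();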
Dataset Splits: No. The paper does not specify standard train/validation/test splits for the datasets used in the experiments; it evaluates optimizer performance on model-fitting problems rather than a model's generalization on held-out validation or test sets.
Hardware Specification: No. "This work has utilized the NYU IT High Performance Computing resources and services." This statement is too general and does not give specific hardware details such as CPU/GPU models, memory, or cloud instance types.
Software Dependencies: Yes. "We tested BADS and many optimizers with implementation available in MATLAB (R2015b, R2017a)."
Experiment Setup: Yes. "For all algorithms, including BADS, we used default settings (no fine-tuning). We ran 50 independent runs of each algorithm on each test function, with randomized starting points and a budget of 500×D function evaluations (200×D for noisy problems). We use the recommended values ν = 0.2 and δ = 0.1 [17]. We initialize Δ^poll_0 = 1 and Δ^mesh_0 = 2^(-10) (in standardized space)... We use τ = 2... We apply an aggressive, repeated SEARCH strategy that consists of up to n_search = max{D, ⌊3 + D/2⌋} unsuccessful SEARCH steps."
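
Put together, the quoted protocol amounts to a driver loop like the MATLAB sketch below. The fields of the problem struct and the run_optimizer handle are hypothetical names for illustration; only the run count, the budget rule, and the randomized starting points come from the paper.

    % Benchmark driver: 50 independent runs per (algorithm, problem)
    % pair, randomized starting points, budget of 500*D evaluations
    % (200*D for noisy problems).
    n_runs = 50;
    D      = numel(problem.plb);            % problem struct: hypothetical
    budget = 500 * D;
    if problem.is_noisy, budget = 200 * D; end
    fval_best = nan(n_runs, 1);
    for run = 1:n_runs
        % Draw a random starting point inside the plausible box.
        x0 = problem.plb + rand(1, D) .* (problem.pub - problem.plb);
        [~, fval_best(run)] = run_optimizer(problem.fun, x0, budget);
    end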