Bayesian Optimization of Risk Measures

Authors: Sait Cakmak, Raul Astudillo Marban, Peter Frazier, Enlu Zhou

NeurIPS 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We demonstrate the effectiveness of our approach in a variety of numerical experiments." and "Section 6 presents numerical experiments demonstrating the performance of the algorithms developed here."
Researcher Affiliation | Academia | Sait Cakmak, Georgia Institute of Technology (scakmak3@gatech.edu); Raul Astudillo, Cornell University (ra598@cornell.edu); Peter Frazier, Cornell University (pf98@cornell.edu); Enlu Zhou, Georgia Institute of Technology (enlu.zhou@isye.gatech.edu)
Pseudocode | No | The paper describes algorithmic steps in prose but does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | "The code for our implementation of the algorithms and the experiments can be found at https://github.com/saitcakmak/BoRisk."
Open Datasets | No | The experiments use data generated by simulators or synthetic functions; the paper does not provide concrete access information (link, DOI, or formal citation) for any publicly available, pre-existing dataset.
Dataset Splits | No | The paper does not provide dataset split information (e.g., exact percentages or sample counts for training, validation, and test sets) for reproducing the partitioning of any external dataset.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU or GPU models, memory) used to run the experiments.
Software Dependencies | No | The paper mentions software such as BoTorch, CVXPortfolio, and the L-BFGS algorithm, but does not provide version numbers for these dependencies.
Experiment Setup | Yes | "We optimize each acquisition function using the L-BFGS [48] algorithm with 10(d_X + d_W) restart points. The restart points are selected from 500(d_X + d_W) raw samples using a heuristic. For the inner optimization problem of ρKG, we use 5 d_X random restarts with 25 d_X raw samples. For both ρKG and ρKG^apx, we use the two-time-scale optimization, where we solve the inner optimization problem once every 10 optimization iterations. ρKG and ρKG^apx are both estimated using K = 10 fantasy GP models and M = 40 sample paths for each fantasy model." and "We initialize each run of the benchmark algorithms with 2d_X + 2 starting points from the X space..." (A hedged code sketch of how these settings might map onto BoTorch-style acquisition optimization follows the table.)
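
The experiment-setup row is concrete enough to illustrate in code. Below is a minimal sketch, assuming a BoTorch-style workflow similar to the released BoRisk code. The ρKG and ρKG^apx acquisition functions are not part of stock BoTorch, so the standard qKnowledgeGradient is used purely as a placeholder, and the problem dimensions dim_x and dim_w are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: mapping the reported restart / raw-sample / fantasy counts onto
# BoTorch's optimize_acqf. qKnowledgeGradient stands in for the paper's rhoKG.
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from botorch.acquisition import qKnowledgeGradient
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

dim_x, dim_w = 2, 1          # assumed decision / environment dimensions
dim = dim_x + dim_w

# Initial design: 2*d_X + 2 starting points, as reported in the paper
# (random placeholder data here, not the paper's test problems).
train_X = torch.rand(2 * dim_x + 2, dim, dtype=torch.double)
train_Y = torch.randn(2 * dim_x + 2, 1, dtype=torch.double)
model = SingleTaskGP(train_X, train_Y)
fit_gpytorch_mll(ExactMarginalLogLikelihood(model.likelihood, model))

# Placeholder acquisition function; the paper's rhoKG uses K = 10 fantasy models.
acqf = qKnowledgeGradient(model, num_fantasies=10)

bounds = torch.stack([torch.zeros(dim, dtype=torch.double),
                      torch.ones(dim, dtype=torch.double)])

# Restart and raw-sample counts scaled by dimension, per the reported setup.
candidate, value = optimize_acqf(
    acq_function=acqf,
    bounds=bounds,
    q=1,
    num_restarts=10 * dim,    # "10 (d_X + d_W) restart points"
    raw_samples=500 * dim,    # "selected from 500 (d_X + d_W) raw samples"
)
```

The two-time-scale inner optimization and the M = 40 sample paths per fantasy model are specific to the ρKG implementation in the BoRisk repository and are not reproduced by this placeholder.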