Bayesian Optimization under Heavy-tailed Payoffs
Authors: Sayak Ray Chowdhury, Aditya Gopalan
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we report numerical results based on experiments on synthetic as well as real-world based datasets, for which the algorithms we develop are seen to perform favorably in the harsher heavy-tailed environments. |
| Researcher Affiliation | Academia | Sayak Ray Chowdhury, Department of ECE, Indian Institute of Science, Bangalore, India 560012 (sayak@iisc.ac.in); Aditya Gopalan, Department of ECE, Indian Institute of Science, Bangalore, India 560012 (aditya@iisc.ac.in) |
| Pseudocode | Yes | Algorithm 1 Truncated GP-UCB (TGP-UCB) and Algorithm 2 Adaptively Truncated Approximate GP-UCB (ATA-GP-UCB) |
| Open Source Code | Yes | (Our codes are available here.) |
| Open Datasets | No | The paper describes synthetic data generated by the authors, stock market data collected by the authors from a specific time period, and 'light sensor data collected in the CMU Intelligent Workplace in Nov 2005'. However, no specific links, DOIs, repositories, or formal citations are provided for any of these datasets to confirm their public availability. |
| Dataset Splits | No | The paper mentions '601 train samples and 192 test samples' for the light sensor data, but it does not provide explicit details about train/validation/test splits (e.g., percentages, validation sample counts, or the splitting methodology) for the remaining experiments. |
| Hardware Specification | No | The paper describes the datasets and experimental setup, but it does not provide specific details about the hardware (e.g., CPU, GPU models, or cloud instances) used to run the experiments. |
| Software Dependencies | No | The paper mentions various algorithms and approximation techniques but does not specify any software dependencies with version numbers (e.g., Python version, specific library versions like PyTorch or TensorFlow). |
| Experiment Setup | Yes | The confidence width βt and truncation level bt of our algorithms, and the trade-off parameter q used in Nyström approximation, are set order-wise similar to those recommended by theory (Theorems 1, 3 and 4). We use λ = 1 in all algorithms and ε = 0.1 in ATA-GP-UCB-Nyström. We use m = 32 features (consistent with Theorem 3) for ATA-GP-UCB-QFF in these experiments. |
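The setup row above names concrete choices (λ = 1, a truncation level b_t, and a confidence width β_t). The following is a minimal illustrative sketch of a truncated GP-UCB-style loop in that spirit, assuming an RBF-kernel GP, indicator-style truncation of heavy-tailed (Student-t) rewards, and a toy 1-D objective; the schedules for b_t and β_t, the kernel, and all names are hypothetical stand-ins, not the authors' released code.

```python
# Illustrative truncated GP-UCB loop (hypothetical; not the paper's implementation).
# Rewards whose magnitude exceeds the growing truncation level b_t are zeroed
# before the GP update, one common truncation device for heavy-tailed bandits.
import numpy as np

rng = np.random.default_rng(0)

def rbf(a, b, ls=0.2):
    """Squared-exponential kernel matrix between 1-D point arrays a and b."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-d2 / (2 * ls ** 2))

def gp_posterior(X, y, Xs, lam=1.0):
    """GP posterior mean and std at query points Xs with regularizer lam (λ)."""
    K = rbf(X, X) + lam * np.eye(len(X))
    Ks = rbf(Xs, X)
    Kinv = np.linalg.inv(K)
    mu = Ks @ Kinv @ y
    var = np.clip(1.0 - np.sum((Ks @ Kinv) * Ks, axis=1), 0.0, None)
    return mu, np.sqrt(var)

def f(x):
    """Unknown objective; purely illustrative."""
    return np.sin(3 * x)

grid = np.linspace(0, 1, 200)                 # candidate arms
X = [float(rng.uniform(0, 1))]
y = [f(X[0]) + rng.standard_t(df=3)]          # heavy-tailed Student-t noise

for t in range(2, 31):
    b_t = t ** 0.5                            # truncation level, growing with t
    beta_t = 2 * np.log(t ** 2)               # confidence width (toy schedule)
    ya = np.asarray(y)
    y_trunc = np.where(np.abs(ya) <= b_t, ya, 0.0)   # truncate past rewards
    mu, sd = gp_posterior(np.asarray(X), y_trunc, grid, lam=1.0)
    x_next = float(grid[np.argmax(mu + np.sqrt(beta_t) * sd)])  # UCB arm
    X.append(x_next)
    y.append(f(x_next) + rng.standard_t(df=3))
```

The key departure from plain GP-UCB is the `y_trunc` line: the posterior is fit to truncated observations, which keeps occasional huge heavy-tailed rewards from dominating the mean estimate.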