Fully Distributed Bayesian Optimization with Stochastic Policies
Authors: Javier Garcia-Barcos, Ruben Martinez-Cantin
IJCAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We present results in several benchmarks and applications that show the performance of our method. Section 5 'Performance Analysis' clearly details experiments on benchmark functions, robot pushing, and hyperparameter tuning of neural networks. Figures 3, 4, 5, 6 show experimental results (regret, best function value over function evaluations). |
| Researcher Affiliation | Academia | Javier Garcia-Barcos¹ and Ruben Martinez-Cantin¹,²; ¹Instituto de Investigacion en Ingenieria de Aragon, University of Zaragoza; ²Centro Universitario de la Defensa, Zaragoza; {jgbarcos, rmcantin}@unizar.es |
| Pseudocode | Yes | Algorithm 1 summarizes the code to be deployed in each node of the computing cluster or distributed system. Algorithm 1 BO-NODE (an illustrative sketch of one such node iteration appears below the table). |
| Open Source Code | No | The paper does not explicitly state that code for this work is released, nor does it include a direct link to a source-code repository for the proposed method. It mentions using 'code from [Wang and Jegelka, 2017]', but this refers to a third-party tool, not the authors' own implementation. |
| Open Datasets | Yes | Variational Autoencoder (VAE) on MNIST. We train a VAE for the MNIST dataset. Feedforward Network on Boston Housing. We fit a single layer feedforward network on the Boston housing dataset. |
| Dataset Splits | No | The paper does not provide specific dataset split information (exact percentages, sample counts, or detailed splitting methodology) for training, validation, and test sets. It mentions 'average of each method over 10 trials' but this is not about data partitioning. |
| Hardware Specification | No | The paper does not provide specific hardware details such as exact GPU/CPU models, processor types, or memory amounts used for running its experiments. It refers to 'high-throughput computing facilities' and 'distributed system' in general terms. |
| Software Dependencies | No | The paper does not provide specific ancillary software details, such as library names with version numbers (e.g., Python 3.8, PyTorch 1.9), which are needed to replicate the experiment. |
| Experiment Setup | Yes | In all the experiments, we assume a network of 10 nodes synchronized, that is, function evaluations are performed in batches of 10 for all distributed methods. For all the plots, we display the average of each method over 10 trials with a 95% confidence interval. The optimization is initialized with p evaluations by sampling from low discrepancy sequences. VAE: number of nodes in the hidden layer, learning rate, learning rate decay and ϵ constant for the ADAM optimizer. Feedforward Network: number of nodes in the hidden layer, learning rate, learning rate decay and ρ parameter for the exponential decay rate from RMSprop. (The batched, synchronized setup is sketched below the table.) |
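
The 'Pseudocode' row quotes Algorithm 1 (BO-NODE), the loop the paper deploys on every node of the cluster or distributed system. The Python sketch below is only an illustration of that general idea under assumptions not taken from the paper: a scikit-learn GP surrogate, expected improvement as the acquisition function, and a Boltzmann (softmax) distribution over a random candidate set as the stochastic policy. The names `bo_node_step` and `expected_improvement` are hypothetical.

```python
# A minimal, hypothetical sketch of a single node in a distributed BO setting.
# It is NOT the paper's Algorithm 1 (BO-NODE); it only illustrates the pattern:
# fit a local surrogate, draw the next query from a stochastic policy (here a
# Boltzmann/softmax over an acquisition function evaluated on random candidates),
# evaluate the objective, and share the new observation with the other nodes.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern


def expected_improvement(mu, sigma, best_y):
    """Expected improvement for minimization; any acquisition function would do."""
    sigma = np.maximum(sigma, 1e-9)
    z = (best_y - mu) / sigma
    return (best_y - mu) * norm.cdf(z) + sigma * norm.pdf(z)


def bo_node_step(X, y, objective, bounds, rng, n_candidates=1000, temperature=1.0):
    """One node iteration: fit surrogate, sample a query stochastically, evaluate it."""
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)

    # Random candidate set over the box `bounds` (shape: dim x 2). The paper works
    # on continuous domains; discretizing with candidates is a simplification here.
    dim = bounds.shape[0]
    candidates = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_candidates, dim))
    mu, sigma = gp.predict(candidates, return_std=True)
    acq = expected_improvement(mu, sigma, y.min())

    # Stochastic policy: softmax over acquisition values, so independent nodes
    # naturally choose different queries without any central coordination.
    logits = acq / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    x_next = candidates[rng.choice(n_candidates, p=probs)]

    y_next = objective(x_next)  # local evaluation
    # In a real deployment, (x_next, y_next) would be broadcast to peer nodes here.
    return x_next, y_next
```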
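
The 'Experiment Setup' row reports 10 synchronized nodes, so each round contributes a batch of 10 function evaluations that is pooled before the next round. The driver below reuses the hypothetical `bo_node_step` above to show how such a batched loop could look; the number of rounds, the hyperparameter bounds, and all identifiers are assumptions rather than details from the paper.

```python
# Hypothetical driver mirroring the reported setup of 10 synchronized nodes,
# i.e. one batch of 10 evaluations per round, pooled before the next round.
# N_ROUNDS and the box bounds for the VAE-style hyperparameters (hidden units,
# log10 learning rate, learning-rate decay, log10 Adam epsilon) are assumptions.
import numpy as np

N_NODES = 10   # batch size per round, as reported in the paper
N_ROUNDS = 20  # assumed; the paper plots performance against function evaluations

bounds = np.array([[16, 512],     # hidden units (continuous relaxation)
                   [-5, -1],      # log10 learning rate
                   [0.0, 1.0],    # learning-rate decay
                   [-10, -6]])    # log10 Adam epsilon


def run_distributed_bo(objective, init_X, init_y, seed=0):
    rng = np.random.default_rng(seed)
    X, y = np.asarray(init_X, float), np.asarray(init_y, float)
    for _ in range(N_ROUNDS):
        # Synchronized round: every node proposes from the same pooled data.
        batch = [bo_node_step(X, y, objective, bounds, rng) for _ in range(N_NODES)]
        X = np.vstack([X] + [x[None, :] for x, _ in batch])
        y = np.concatenate([y, [fy for _, fy in batch]])
    return X, y, y.min()
```

With a concrete `objective` (for instance, a function that trains a model with the given hyperparameters and returns its validation loss), `run_distributed_bo(objective, init_X, init_y)` would return the pooled observations and the best value found; averaging such runs over 10 seeds would correspond to the reported 10-trial curves.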