Structured Variationally Auto-encoded Optimization

Authors: Xiaoyu Lu, Javier González, Zhenwen Dai, Neil D. Lawrence

ICML 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Several experiments in various application domains are used to illustrate the utility and generality of the approach described in this work. ... In Section 3, we illustrate its performance with a series of experimental results.
Researcher Affiliation | Collaboration | Xiaoyu Lu (1), Javier González (2), Zhenwen Dai (2), Neil D. Lawrence (2, 3). 1: Department of Statistics, University of Oxford, United Kingdom; 2: Amazon, Cambridge, United Kingdom; 3: University of Sheffield, United Kingdom.
Pseudocode | Yes | Algorithm 1 (Context-free grammar for kernel expressions generation) and Algorithm 3 (Structured variational auto-encoded optimization, the SVO algorithm). A toy sketch of such a grammar sampler appears after the table.
Open Source Code | Yes | An implementation of the proposed approach can be found in the GPyOpt library, https://github.com/SheffieldML/GPyOpt, together with the scripts to reproduce the results of this work. A generic GPyOpt usage sketch appears after the table.
Open Datasets | Yes | The Airline dataset described in the previous section and the Barley and Concrete datasets. The Concrete dataset has dimension 8, while the rest are unidimensional. ... All datasets are available at the UCI repository (https://archive.ics.uci.edu/ml/index.php) except the Barley, which can be found at https://datamarket.com
Dataset Splits | Yes | We apply Algorithm 3 using the first 10%, 37% and 63% of the data. The rest is used for testing. A minimal sketch of this split follows the table.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, memory, or cloud instance types used to run the experiments.
Software Dependencies | No | The paper mentions software such as the GPyOpt library but does not provide version numbers for any of the software dependencies required to replicate the experiments.
Experiment Setup | Yes | We use uniform prior probabilities in both the grammar operations and the base kernels. We allow a maximum of 5 operations in the grammar generating process. We train an SG-VAE with 2 hidden layers of 400 hidden units on 20,000 simulated kernel combinations (of which 4697 were unique) and 5 dimensions in the latent space. The reported sizes are sketched after the table.
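
The pseudocode row references Algorithm 1, a context-free grammar over kernel expressions, and the setup row fixes uniform priors and at most 5 grammar operations. The sketch below is a toy sampler consistent with those settings, not the paper's Algorithm 1: the base kernel set, the operation set, and all function names are assumptions made for illustration.

```python
import random

# Hypothetical base kernels and binary operations; the paper's actual
# grammar (Algorithm 1) may use different symbols and production rules.
BASE_KERNELS = ["SE", "PER", "LIN", "RQ"]
OPERATIONS = ["+", "*"]
MAX_OPERATIONS = 5  # the paper caps the generating process at 5 operations


def sample_kernel_expression(rng=random):
    """Draw one kernel expression with uniform probabilities over
    base kernels and grammar operations, as in the reported setup."""
    expr = rng.choice(BASE_KERNELS)
    for _ in range(rng.randint(0, MAX_OPERATIONS)):
        expr = f"({expr} {rng.choice(OPERATIONS)} {rng.choice(BASE_KERNELS)})"
    return expr


# e.g. simulate the 20,000 kernel combinations used to train the SG-VAE
expressions = [sample_kernel_expression() for _ in range(20000)]
print(expressions[:3], "unique:", len(set(expressions)))
```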
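
The open-source-code row points to the GPyOpt repository. The snippet below is a minimal, generic GPyOpt Bayesian-optimization call showing how the library is driven; it is standard GPyOpt usage, not the paper's SVO scripts, and the two-dimensional quadratic objective is invented for illustration.

```python
import numpy as np
import GPyOpt

# GPyOpt objectives take a 2-D array of candidate points (one per row)
# and return one value per row.
def objective(x):
    return np.sum(x ** 2, axis=1, keepdims=True)

domain = [
    {"name": "x1", "type": "continuous", "domain": (-5.0, 5.0)},
    {"name": "x2", "type": "continuous", "domain": (-5.0, 5.0)},
]

opt = GPyOpt.methods.BayesianOptimization(f=objective, domain=domain)
opt.run_optimization(max_iter=15)
print(opt.x_opt, opt.fx_opt)  # best point found and its objective value
```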
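
The splits row states that the first 10%, 37% and 63% of the data train the model and the remainder is held out. A minimal sketch of that split, assuming the observations arrive as time-ordered arrays:

```python
import numpy as np

def head_split(X, y, train_fraction):
    """Use the first `train_fraction` of the (ordered) data for training
    and the remainder for testing, per the paper's protocol."""
    n = int(len(X) * train_fraction)
    return (X[:n], y[:n]), (X[n:], y[n:])

# Toy stand-in for a dataset such as Airline; the real data are loaded
# from the UCI repository.
X = np.arange(100, dtype=float).reshape(-1, 1)
y = np.sin(X).ravel()
for frac in (0.10, 0.37, 0.63):  # the three training fractions reported
    (X_tr, y_tr), (X_te, y_te) = head_split(X, y, frac)
    print(frac, len(X_tr), len(X_te))
```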
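
Finally, the setup row reports an SG-VAE with 2 hidden layers of 400 units and a 5-dimensional latent space. The PyTorch sketch below reproduces only those sizes in a plain VAE; the structured, grammar-aware parts of the SG-VAE and the actual encoding of kernel expressions (INPUT_DIM here is a made-up placeholder) are omitted, so read it purely as a shape-level illustration.

```python
import torch
import torch.nn as nn

LATENT_DIM = 5   # latent dimensionality reported in the paper
HIDDEN = 400     # hidden units per layer reported in the paper
INPUT_DIM = 128  # hypothetical size of an encoded kernel expression


class VAESketch(nn.Module):
    """Plain VAE matching the reported layer sizes; the paper's SG-VAE
    additionally exploits the grammar structure, which is omitted here."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(INPUT_DIM, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, HIDDEN), nn.ReLU(),
        )
        self.to_mu = nn.Linear(HIDDEN, LATENT_DIM)
        self.to_logvar = nn.Linear(HIDDEN, LATENT_DIM)
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_DIM, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, INPUT_DIM),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.decoder(z), mu, logvar


recon, mu, logvar = VAESketch()(torch.randn(8, INPUT_DIM))  # toy batch
print(recon.shape, mu.shape)  # torch.Size([8, 128]) torch.Size([8, 5])
```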