Extended and Unscented Gaussian Processes

Authors: Daniel M. Steinberg, Edwin V. Bonilla

NeurIPS 2014

Reproducibility variables, each listed with the assessed result and the supporting LLM response:

Research Type: Experimental
LLM Response: We evaluate the performance of our algorithms on a number of synthetic inversion problems and a binary classification dataset. Table 1 shows results for multiple differentiable forward models, g(·). We test the EGP and UGP against the model in [9], which uses 10,000 samples to evaluate the one-dimensional expectations. The results are summarized in Table 2, where we report the average Bernoulli negative log-probability (NLP), the error rate, and the learned hyperparameter values for the GPs.

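The comparison above pits the UGP's deterministic sigma-point approximation against a 10,000-sample Monte Carlo estimate of the same one-dimensional expectations. Below is a minimal Python sketch of that contrast, assuming a scalar Gaussian over the latent f, a logistic forward model, and textbook unscented-transform sigma points and weights; it is an illustration of the technique, not the paper's implementation.

```python
import numpy as np

def g(f):
    """Logistic sigmoid forward model, one of the g(.) tested in the paper."""
    return 1.0 / (1.0 + np.exp(-f))

def mc_expectation(mu, var, n_samples=10_000, seed=0):
    """Monte Carlo estimate of E[g(f)] for f ~ N(mu, var), as in [9]."""
    rng = np.random.default_rng(seed)
    return g(rng.normal(mu, np.sqrt(var), n_samples)).mean()

def unscented_expectation(mu, var, kappa=2.0):
    """Unscented-transform estimate of E[g(f)]: three sigma points suffice in 1-D."""
    spread = np.sqrt((1.0 + kappa) * var)
    sigma_points = np.array([mu, mu + spread, mu - spread])
    weights = np.array([kappa / (1.0 + kappa),
                        0.5 / (1.0 + kappa),
                        0.5 / (1.0 + kappa)])
    return weights @ g(sigma_points)

# The two estimates should closely agree, but the unscented transform
# needs only 3 evaluations of g instead of 10,000 samples.
print(mc_expectation(0.5, 2.0))
print(unscented_expectation(0.5, 2.0))
```
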
Researcher Affiliation: Academia
LLM Response: Daniel M. Steinberg, NICTA, daniel.steinberg@nicta.com.au; Edwin V. Bonilla, The University of New South Wales, e.bonilla@unsw.edu.au. NICTA is funded by the Australian Government through the Department of Communications and the Australian Research Council through the ICT Centre of Excellence Program.

Pseudocode: No
LLM Response: The paper describes its algorithms through mathematical equations and textual explanations, but it does not contain a structured pseudocode block or an explicitly labeled algorithm section.

Open Source Code: No
LLM Response: The paper makes no explicit statement about releasing source code for the described methodology and provides no link to a code repository.

Open Datasets: Yes
LLM Response: We use the USPS handwritten digits dataset with the task of distinguishing between 3 and 5; this is the same experiment from [3, §3.7.3].

Dataset Splits: Yes
LLM Response: 1000 points are generated in this way, and we use 5-fold cross validation to train (200 points) and test (800 points) the GPs.

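Note that this split is inverted relative to common k-fold practice: each 200-point fold trains the model and the remaining 800 points are held out for testing. A hedged sketch of that protocol with scikit-learn's KFold follows; the X, y arrays are hypothetical stand-ins for the paper's generated points.

```python
import numpy as np
from sklearn.model_selection import KFold

# Hypothetical stand-ins for the 1000 synthetically generated points.
X, y = np.random.randn(1000, 1), np.random.randn(1000)

# Deliberately swapped relative to standard usage: the 200-point fold
# is used for *training* and the 800-point remainder for *testing*.
kfold = KFold(n_splits=5, shuffle=True, random_state=0)
for test_idx, train_idx in kfold.split(X):  # swapped on purpose
    X_train, y_train = X[train_idx], y[train_idx]
    X_test, y_test = X[test_idx], y[test_idx]
    assert len(train_idx) == 200 and len(test_idx) == 800
```
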
Hardware Specification: No
LLM Response: The paper does not provide specific details about the hardware used to run the experiments, such as CPU/GPU models, memory, or cloud computing instances.

Software Dependencies: No
LLM Response: Specifically we use derivative-free optimization methods (e.g. BOBYQA) from the NLopt library [15] ... GPML Matlab toolbox [3] ... and logistic regression (both from the scikit-learn python library [17]). The paper names the software libraries it uses but does not give version numbers for any of them.

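Since the paper optimizes its objective with derivative-free methods from NLopt, a minimal sketch of bound-constrained BOBYQA via the nlopt Python bindings may help reproduction; the neg_elbo objective here is a hypothetical placeholder, not the paper's variational objective.

```python
import nlopt
import numpy as np

def neg_elbo(theta):
    """Hypothetical placeholder for the negative variational objective
    that would be minimized over the kernel hyperparameters."""
    length_scale, noise_var = theta
    return (np.log(length_scale) - 0.3) ** 2 + (noise_var - 0.2) ** 2

def objective(x, grad):
    # BOBYQA is derivative-free, so `grad` is unused (passed in empty).
    return neg_elbo(x)

opt = nlopt.opt(nlopt.LN_BOBYQA, 2)    # 2 hyperparameters
opt.set_lower_bounds([0.1, 0.01])      # bounds quoted in the setup row below
opt.set_min_objective(objective)
opt.set_xtol_rel(1e-6)
theta_opt = opt.optimize([1.0, 1.0])   # initialized at 1.0, as described
```
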
Experiment Setup: Yes
LLM Response: All GP methods use Matérn 5/2 covariance functions with the hyperparameters and σ² initialized at 1.0 and lower-bounded at 0.1 (and 0.01 for σ²). A logistic sigmoid is used as the forward model, g(·), in our algorithms. A squared exponential kernel with amplitude σ_se and length scale l_se is used for the GPs in this experiment. We initialize these hyperparameters at 1.0 and put a lower bound of 0.1 on them. The hyperparameters for the SVM are learned using grid search with three-fold cross validation.

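For concreteness, here are the two covariance functions and the forward model named above in their standard textbook forms; the paper does not print these formulas, so the parameterizations are assumptions about the usual conventions.

```python
import numpy as np

def matern52(r, sigma2=1.0, length=1.0):
    """Matern 5/2 covariance as a function of distance r >= 0
    (standard form: sigma2 * (1 + s + s^2/3) * exp(-s), s = sqrt(5) r / l)."""
    s = np.sqrt(5.0) * r / length
    return sigma2 * (1.0 + s + s**2 / 3.0) * np.exp(-s)

def squared_exponential(r, amp=1.0, length=1.0):
    """Squared exponential kernel with amplitude sigma_se and length scale l_se."""
    return amp**2 * np.exp(-0.5 * (r / length) ** 2)

def logistic(f):
    """Logistic sigmoid forward model g(.)."""
    return 1.0 / (1.0 + np.exp(-f))
```
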
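The SVM baseline's hyperparameter search could look like the following sketch; the grid values and RBF kernel choice are illustrative assumptions, since the paper only states that grid search with three-fold cross validation was used.

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Illustrative grid; the paper does not report which values were searched.
param_grid = {"C": [0.1, 1.0, 10.0, 100.0], "gamma": [0.01, 0.1, 1.0]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=3)
# search.fit(X_train, y_train) would then select the best (C, gamma) pair.
```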