Healing Products of Gaussian Process Experts
Authors: Samuel Cohen, Rendani Mbuvha, Tshilidzi Marwala, Marc Deisenroth
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Throughout this section, we evaluate the performance of our approaches to calibrating GP experts when applied to regression and classification, while comparing with sparse variational methods and previous approaches to local-expert weighting and averaging. We consider performance metrics including the negative log-predictive density (NLPD) and the root mean squared error (RMSE). (A sketch of both metrics follows the table.) |
| Researcher Affiliation | Academia | Department of Computer Science, University College London, UK; Institute of Intelligent Systems, University of Johannesburg, South Africa; School of Statistics and Actuarial Science, University of the Witwatersrand, South Africa. |
| Pseudocode | No | The paper does not contain any explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code available at https://github.com/samcohen16/Healing-POEs-ICML |
| Open Datasets | Yes | Datasets are from https://github.com/hughsalimbeni/bayesian_benchmarks. In addition: "We now assess the classification performance of expert models in a non-conjugate multi-class classification setting (MNIST dataset)." |
| Dataset Splits | No | The paper explicitly mentions a 'training/test split of 60,000/10,000 images' for the MNIST dataset. However, it does not provide specific details on validation splits or general split methodologies for all datasets used, such as percentages or sample counts for training, validation, and testing partitions. |
| Hardware Specification | No | The paper discusses distributing computation across 'computing units' but does not provide specific details about the hardware used, such as GPU/CPU models, memory, or cloud instance types. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., 'Python 3.8', 'PyTorch 1.9'). |
| Experiment Setup | Yes | For softmax weightings, we use a temperature of 100, which performs well across small and large-scale benchmarks. We reduce the dimensionality of images with PCA (20 principal components). We assign 500 training points to each SVGP expert, and provide them with 100 trainable inducing inputs each. We use a multiclass likelihood with a robust-max link function. (Hedged sketches of this setup and the tempered weighting follow the table.) |
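For readers checking the reported metrics, here is a minimal sketch (not taken from the paper's code) of the two quantities, assuming each model emits a Gaussian predictive mean and variance per test point:

```python
import numpy as np

def rmse(y_true, mu):
    """Root mean squared error of the predictive mean."""
    return np.sqrt(np.mean((y_true - mu) ** 2))

def gaussian_nlpd(y_true, mu, var):
    """Negative log-predictive density under a Gaussian predictive
    distribution N(mu, var), averaged over test points."""
    return np.mean(0.5 * np.log(2 * np.pi * var)
                   + 0.5 * (y_true - mu) ** 2 / var)

# Toy usage with synthetic predictions.
rng = np.random.default_rng(0)
y = rng.normal(size=100)
mu = y + 0.1 * rng.normal(size=100)
var = np.full(100, 0.2)
print(rmse(y, mu), gaussian_nlpd(y, mu, var))
```

For the multi-class MNIST experiments the NLPD would instead be computed from categorical predictive probabilities; the Gaussian form above matches the regression benchmarks.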
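The stated setup can also be sketched end to end. The following is a hypothetical reconstruction, assuming a GPflow 2.x-style API (`gpflow.models.SVGP`, `gpflow.likelihoods.MultiClass`, `gpflow.likelihoods.RobustMax`) and synthetic stand-in data in place of MNIST; the authors' actual implementation lives in the repository linked above:

```python
import numpy as np
import gpflow
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 784))                        # stand-in for flattened MNIST images
y = rng.integers(0, 10, size=(2000, 1)).astype(float)   # stand-in labels
num_classes, points_per_expert, num_inducing = 10, 500, 100

# Reduce dimensionality with PCA (20 principal components).
X20 = PCA(n_components=20).fit_transform(X)

experts = []
for i in range(0, len(X20), points_per_expert):
    # Assign 500 training points to each SVGP expert.
    X_i, y_i = X20[i:i + points_per_expert], y[i:i + points_per_expert]
    model = gpflow.models.SVGP(
        kernel=gpflow.kernels.SquaredExponential(),
        # Multiclass likelihood with a robust-max link function.
        likelihood=gpflow.likelihoods.MultiClass(
            num_classes, invlink=gpflow.likelihoods.RobustMax(num_classes)),
        # 100 trainable inducing inputs per expert (initialised, arbitrarily
        # for this sketch, at the first points of the shard).
        inducing_variable=X_i[:num_inducing].copy(),
        num_latent_gps=num_classes,
        num_data=len(X_i),
    )
    experts.append((model, (X_i, y_i)))
```

Initialising each expert's inducing inputs at the first points of its shard is an arbitrary illustration; the paper states only that each expert receives 100 trainable inducing inputs.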
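Finally, the tempered softmax weighting can be illustrated with a generalised product-of-experts combination of Gaussian experts. The per-expert confidence score below (reduction in predictive variance relative to the prior) is an assumption chosen for illustration; the paper states only that a softmax weighting with temperature 100 is used:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the expert axis (axis 0).
    z = z - z.max(axis=0, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=0, keepdims=True)

def combine_experts(mus, vars_, prior_var, temperature=100.0):
    """mus, vars_: (num_experts, num_test) per-expert predictive means and
    variances. Returns the gPoE-style combined mean and variance per test point."""
    score = prior_var - vars_                # confidence: variance reduction vs. prior
    w = softmax(temperature * score)         # tempered softmax weights (sum to 1)
    precision = (w / vars_).sum(axis=0)      # weighted sum of expert precisions
    var = 1.0 / precision
    mu = var * (w * mus / vars_).sum(axis=0)
    return mu, var

# Toy usage with two experts over five test points.
mus = np.array([[0.0, 1.0, 2.0, 3.0, 4.0],
                [0.1, 0.9, 2.2, 2.8, 4.1]])
vars_ = np.array([[0.5, 0.2, 0.9, 0.3, 0.4],
                  [0.4, 0.6, 0.2, 0.5, 0.3]])
print(combine_experts(mus, vars_, prior_var=1.0))
```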