Variational Implicit Processes
Authors: Chao Ma, Yingzhen Li, José Miguel Hernández-Lobato
ICML 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments show that VIPs return better uncertainty estimates and lower errors than existing inference methods on challenging models such as Bayesian neural networks and Gaussian processes. |
| Researcher Affiliation | Collaboration | (1) Department of Engineering, University of Cambridge, Cambridge, UK; (2) Microsoft Research Cambridge, Cambridge, UK. |
| Pseudocode | Yes | Algorithm 1 Variational Implicit Processes (VIP) |
| Open Source Code | No | The paper does not contain an explicit statement providing concrete access to source code for the methodology described. |
| Open Datasets | Yes | We use an IP with a Bayesian neural network (1-10-10-1 architecture) as the prior. We use α = 0 for the wake-step training. We also compare VIP with the exact full GP with optimized compositional kernel (RBF+Periodic), and another BNN with identical architecture but trained using variational dropout (VDO) with dropout rate p = 0.99 and length scale l = 0.001. The (hyper-)parameters are optimized using 500 epochs (batch training) with Adam optimizer (learning rate = 0.01). ... We compare the VIP (α = 0) with a variationally sparse GP (SVGP, 100 inducing points), an exact GP and VDO on the solar irradiance dataset (Lean et al., 1995). ... using real-world multivariate regression datasets from the UCI data repository (Lichman et al., 2013). ... Harvard Clean Energy Project Data, the world's largest materials high-throughput virtual screening effort (Hachmann et al., 2014). |
| Dataset Splits | Yes | The observational noise variance for VIP and VDO is tuned over a validation set, as detailed in Appendix F. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions 'Adam optimizer' but does not specify software names with version numbers for its dependencies. |
| Experiment Setup | Yes | The (hyper-)parameters are optimized using 500 epochs (batch training) with Adam optimizer (learning rate = 0.01). |
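
The setup rows above quote a 1-10-10-1 network prior trained for 500 epochs (batch training) with the Adam optimizer at learning rate 0.01. The sketch below illustrates only those reported optimization settings; it is not the authors' VIP wake-sleep inference, and the class name, Tanh activations, and MSE objective are assumptions made for the sake of a runnable example.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the 1-10-10-1 architecture quoted in the table.
# Only the reported hyper-parameters (Adam, lr = 0.01, 500 epochs, full-batch
# training) are taken from the paper; everything else is illustrative.
class SmallRegressionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, 10), nn.Tanh(),
            nn.Linear(10, 10), nn.Tanh(),
            nn.Linear(10, 1),
        )

    def forward(self, x):
        return self.net(x)

def train(x, y, epochs=500, lr=0.01):
    """Full-batch training loop using the reported optimizer settings."""
    model = SmallRegressionNet()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()  # placeholder objective; VIP optimizes a variational bound instead
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return model
```

A call such as `train(x_train, y_train)` with float tensors of shape `(N, 1)` reproduces the batch-training regime described in the Experiment Setup row, with the variational objective left out of scope here.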