Probabilistic Linear Solvers for Machine Learning

Authors: Jonathan Wenger, Philipp Hennig

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "4 Experiments: This section demonstrates the functionality of Algorithm 1. We choose some deliberately simple example problems from machine learning and scientific computation, where the solver can be used to quantify uncertainty induced by finite computation, solve multiple consecutive linear systems, and propagate information between problems." and "Table 2: Uncertainty calibration for kernel matrices. Monte Carlo estimate w̄ ≈ E_x[w(x)] measuring calibration given 10^5/n sampled linear problems of the form (K + ε²I)x = b for each kernel and calibration method."
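The quoted Table 2 caption draws many regularized kernel systems (K + ε²I)x = b. A minimal NumPy sketch of sampling one such problem is shown below; the RBF kernel, lengthscale, problem sizes, and seed are illustrative assumptions, not the paper's exact experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(X, lengthscale=1.0):
    """Gram matrix of the RBF (squared-exponential) kernel."""
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * sq_dists / lengthscale**2)

def sample_kernel_problem(n=50, d=2, eps=1e-2):
    """Draw one regularized kernel system (K + eps^2 I) x = b."""
    X = rng.standard_normal((n, d))          # random inputs
    A = rbf_kernel(X) + eps**2 * np.eye(n)   # SPD system matrix
    b = rng.standard_normal(n)               # random right-hand side
    return A, b

A, b = sample_kernel_problem()
x = np.linalg.solve(A, b)                    # reference direct solve
```

The ε²I term keeps the otherwise near-singular kernel Gram matrix symmetric positive definite, which is the setting the solver's calibration experiment targets.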
Researcher Affiliation | Academia | Jonathan Wenger, Philipp Hennig; University of Tübingen and Max Planck Institute for Intelligent Systems, Tübingen, Germany; {jonathan.wenger, philipp.hennig}@uni-tuebingen.de
Pseudocode | Yes | "Algorithm 1: Probabilistic Linear Solver with Uncertainty Calibration"
Open Source Code | Yes | "We provide an open-source implementation of Algorithm 1 as part of PROBNUM, a Python package implementing probabilistic numerical methods, in an online code repository: https://github.com/probabilistic-numerics/probnum"
Open Datasets | Yes | "We apply various differentiable kernels to the airline delay dataset from January 2020 [34]." and "US Department of Transportation. Airline on-time performance data. https://www.transtats.bts.gov/, 2020. Accessed: 2020-05-26."
Dataset Splits | No | The paper mentions "randomly sampled test problems" but does not provide specific details on training, validation, or test splits (percentages, sample counts, or predefined split citations).
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, processor types, or memory amounts) used for running the experiments are mentioned.
Software Dependencies | No | The paper mentions "PROBNUM, a Python package" but does not specify version numbers for Python or any other key software libraries or solvers.
Experiment Setup | Yes | "Policy and Step Size: In each iteration our solver collects information about the linear operator A via actions s_i determined by the policy π(s | A, H, x, A, b). The next action s_i = E[H] r_{i−1} is chosen based on the current belief about the inverse." and "Stopping Criteria: Classic linear solvers typically use stopping criteria based on the current residual, of the form ‖Ax_i − b‖₂ ≤ max(δ_rtol ‖b‖₂, δ_atol) for relative and absolute tolerances δ_rtol and δ_atol." and "This can be interpreted as a form of hyperparameter optimization similar to optimization of kernel parameters in GP regression."
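The quoted setup names two concrete pieces: the action rule s_i = E[H] r_{i−1} and the residual-based stopping rule ‖Ax_i − b‖₂ ≤ max(δ_rtol ‖b‖₂, δ_atol). A minimal NumPy sketch of just these two pieces follows; the identity prior mean for H, the exact line search step size, and the tolerance defaults are illustrative assumptions, and the paper's full Algorithm 1 additionally updates the belief over A and H and calibrates uncertainty:

```python
import numpy as np

def solve_with_policy(A, b, H_mean=None, rtol=1e-6, atol=1e-9, maxiter=None):
    """Iterate actions s_i = E[H] r_{i-1}; stop when
    ||A x_i - b||_2 <= max(rtol * ||b||_2, atol)."""
    n = len(b)
    if H_mean is None:
        H_mean = np.eye(n)            # assumed prior mean for the inverse H
    if maxiter is None:
        maxiter = 10 * n
    x = np.zeros(n)
    for _ in range(maxiter):
        r = b - A @ x                 # current residual
        if np.linalg.norm(r) <= max(rtol * np.linalg.norm(b), atol):
            break                     # residual-based stopping criterion
        s = H_mean @ r                # action from current inverse belief
        alpha = (s @ r) / (s @ (A @ s))   # exact line search on SPD A
        x = x + alpha * s
    return x
```

With the identity prior mean this reduces to steepest descent; in the paper the belief E[H] is refined each iteration, which is what makes the policy conjugate-gradient-like.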