Matérn Gaussian Processes on Riemannian Manifolds

Authors: Viacheslav Borovitskiy, Alexander Terenin, Peter Mostowsky, Marc Peter Deisenroth

NeurIPS 2020

Reproducibility Variable Result LLM Response
Research Type Experimental Here we showcase two examples to illustrate the theory: dynamical system prediction and sample path visualization.
Researcher Affiliation Academia Viacheslav Borovitskiy (St. Petersburg State University; St. Petersburg Department of Steklov Mathematical Institute of the Russian Academy of Sciences), Alexander Terenin (Imperial College London), Peter Mostowsky (St. Petersburg State University), Marc Peter Deisenroth (University College London)
Pseudocode No No structured pseudocode or algorithm blocks (clearly labeled algorithm sections or code-like formatted procedures) are present in the paper.
Open Source Code Yes Code available at https://github.com/spbu-math-cs/Riemannian-Gaussian-Processes and https://github.com/aterenin/SparseGaussianProcesses.jl.
Open Datasets Yes We take M to be the dragon manifold from the Stanford 3D scanning repository, modified slightly to remove components not connected to the outer surface.
Dataset Splits No The paper mentions 'training proceeds using mini-batch stochastic variational inference' but does not provide specific dataset split information (e.g., exact percentages, sample counts, or citations to predefined splits) for training, validation, or testing.
Hardware Specification No The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments.
Software Dependencies No The paper mentions software such as the 'Firedrake package' and implies the use of Julia (from the .jl extension in one GitHub link), but it does not provide specific version numbers for these or other ancillary software components.
Experiment Setup Yes Following Hensman et al. [21], training proceeds using mini-batch stochastic variational inference with automatic relevance determination. The full setup is given in Appendix A.
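The setup row quotes mini-batch stochastic variational inference with automatic relevance determination (ARD). As a hedged illustration of the ARD component only — a minimal NumPy sketch with hypothetical names, not the authors' implementation — the kernel below assigns each input dimension its own lengthscale, so that optimizing the lengthscales lets the model softly switch off irrelevant dimensions:

```python
import numpy as np

def ard_kernel(X1, X2, lengthscales, variance=1.0):
    """Squared-exponential kernel with per-dimension (ARD) lengthscales.

    During hyperparameter optimization, a dimension that is irrelevant
    to the targets tends to receive a large lengthscale, which makes
    the kernel nearly constant along that dimension.
    """
    Z1 = X1 / lengthscales  # rescale each input dimension independently
    Z2 = X2 / lengthscales
    # Pairwise squared distances in the rescaled space.
    sq_dists = (
        np.sum(Z1**2, axis=1)[:, None]
        + np.sum(Z2**2, axis=1)[None, :]
        - 2.0 * Z1 @ Z2.T
    )
    return variance * np.exp(-0.5 * sq_dists)

# Toy usage: dimension 1 gets a very large lengthscale, so the
# resulting Gram matrix depends almost entirely on dimension 0.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 2))
K = ard_kernel(X, X, lengthscales=np.array([0.5, 100.0]))
```

In a full stochastic variational setup, these lengthscales would be trained jointly with the variational parameters on mini-batches; the sketch above shows only the kernel that makes relevance determination "automatic".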