Posterior Contraction Rates for Matérn Gaussian Processes on Riemannian Manifolds

Authors: Paul Rosa, Viacheslav Borovitskiy, Alexander Terenin, Judith Rousseau

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We illustrate these rates empirically on a number of examples, which, mirroring prior work, show that intrinsic processes can achieve better performance in practice. Therefore, our work shows that finer-grained analyses are needed to distinguish between different levels of data-efficiency of geometric Gaussian processes, particularly in settings which involve small data set sizes and non-asymptotic behavior."
Researcher Affiliation | Academia | Paul Rosa (University of Oxford), Viacheslav Borovitskiy (ETH Zürich), Alexander Terenin (University of Cambridge and Cornell University), Judith Rousseau (University of Oxford)
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code available at: https://github.com/aterenin/geometric_asymptotics
Open Datasets | Yes | "We use three manifolds, each represented by a mesh: (i) a dumbbell-shaped manifold represented as a mesh with 1556 nodes, (ii) a sphere represented by an icosahedral mesh with 2562 nodes, and (iii) the Stanford dragon mesh, preprocessed to keep only its largest connected component, which has 100179 nodes. For the sphere, we also considered a finer icosahedral mesh with 10242 nodes, but this was found to have virtually no effect on the computed pointwise expected errors."
Dataset Splits | No | The paper mentions data sizes and a test set, but does not specify explicit training, validation, or test splits (e.g., exact percentages or sample counts for each split).
Hardware Specification | No | The paper does not describe the hardware used for its experiments (specific processor models, accelerators, or memory).
Software Dependencies | No | The paper mentions GPJax [38] and the GeometricKernels library, but does not provide version numbers for these software components.
Experiment Setup | Yes | "We use extrinsic Matérn and Riemannian Matérn kernels with the following hyperparameters: σ²_f = 1 and σ²_ε = 0.0005. For the truncated Karhunen–Loève expansion, we used J = 500 eigenpairs obtained from the mesh. We selected smoothness values to ensure norm-equivalence of the intrinsic and extrinsic kernels' reproducing kernel Hilbert spaces, which was ν = 5/2 for the intrinsic model, and ν = 5/2 + d/2 for the extrinsic model, where d is the manifold's dimension. We used different length scales for each manifold: κ = 200 for the dumbbell, κ = 0.25 for the sphere, and κ = 0.05 for the dragon... We used ADAM with a learning rate of 0.005, and an initialization equal to the length scale κ of the intrinsic model... We ran the optimizer for a total of 1000 steps."
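The setup above can be illustrated with a minimal sketch of the truncated Karhunen–Loève construction of a Riemannian Matérn kernel. This is not the authors' code (they use GPJax and GeometricKernels with eigenpairs computed from a mesh); as an assumption for self-containedness, we substitute the circle S¹, whose Laplacian eigenpairs are known in closed form, and use the paper's hyperparameters σ²_f = 1, ν = 5/2, J = 500, κ = 0.25. The function name `matern_spectral_kernel` is ours, not from the paper or any library.

```python
import numpy as np

def matern_spectral_kernel(x, y, kappa=0.25, nu=2.5, sigma_f2=1.0, J=500, d=1):
    """Truncated Karhunen-Loeve Matern kernel on the circle S^1 (illustrative).

    k(x, y) ∝ sum_j (2 nu / kappa^2 + lambda_j)^(-(nu + d/2)) f_j(x) f_j(y),
    where (lambda_j, f_j) are Laplacian eigenpairs. On a mesh, the eigenpairs
    would instead come from a numerical eigensolver, as in the paper.
    """
    # Laplacian eigenvalues on S^1: 0, then j^2 with multiplicity two.
    js = np.arange(1, J + 1)
    lam = np.concatenate([[0.0], np.repeat(js.astype(float) ** 2, 2)])
    # Matern spectral weights.
    w = (2 * nu / kappa**2 + lam) ** (-(nu + d / 2))

    def phi(t):
        # Orthonormal Fourier eigenfunctions: constant, cos(j t), sin(j t).
        cols = [np.full_like(t, 1.0 / np.sqrt(2 * np.pi))]
        for j in js:
            cols.append(np.cos(j * t) / np.sqrt(np.pi))
            cols.append(np.sin(j * t) / np.sqrt(np.pi))
        return np.stack(cols, axis=-1)

    K = phi(x) @ (w[:, None] * phi(y).T)
    # Normalise so that k(t, t) = sigma_f^2 (constant on S^1, since the
    # paired cos/sin eigenfunctions satisfy cos^2 + sin^2 = 1).
    return sigma_f2 * K / (w @ (phi(np.zeros(1))[0] ** 2))

# Evaluate the kernel on a grid of inputs on the circle.
x = np.linspace(0, 2 * np.pi, 50, endpoint=False)
K = matern_spectral_kernel(x, x)
```

The extrinsic model in the paper instead embeds the manifold in Euclidean space and applies an ordinary Matérn kernel there, with its smoothness shifted to ν + d/2 so that the two models' reproducing kernel Hilbert spaces are norm-equivalent.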