Implicit Manifold Gaussian Process Regression

Authors: Bernardo Fichera, Viacheslav Borovitskiy, Andreas Krause, Aude Billard

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We evaluate our technique on a synthetic low-dimensional example and test it in a high-dimensional large dataset setting of predicting rotation angles of rotated MNIST images, improving over the standard Gaussian process regression." (Section 5.1: Synthetic Examples; Section 5.2: High Dimensional Datasets.)
Researcher Affiliation | Academia | "Bernardo Fichera¹, Viacheslav Borovitskiy², Andreas Krause², Aude Billard¹ (¹EPFL, ²ETH Zürich)"
Pseudocode | No | The paper describes the algorithm in Section 4.4 "Resulting Algorithm" using numbered steps, but it is not presented in a formal pseudocode or algorithm block format. (A hedged sketch of the general pipeline appears after this table.)
Open Source Code | Yes | Code available at https://github.com/nash169/manifold-gp.
Open Datasets | Yes | "We consider two MNIST-based datasets. The last dataset, CT slices, has dimensionality of d = 385; we split it to have N = 24075 training samples and 24075 testing samples. Dataset names can be complemented by the fraction of labeled samples, e.g. MR-MNIST-10% refers to n = 10%N." (The labeled-fraction convention is illustrated after this table.)
Dataset Splits | No | The paper specifies training and testing samples (e.g., "N = 10000 training samples and 1000 testing samples" for SR-MNIST; "N = 24075 training samples and 24075 testing samples" for CT slices), but it does not explicitly mention a separate validation dataset or its split.
Hardware Specification | No | The paper mentions "GPU acceleration" and "high-memory hardware setups" but does not provide specific hardware details such as exact GPU or CPU models, processor types, or memory amounts used for the experiments.
Software Dependencies | No | The paper mentions software like FAISS, GPyTorch, PyTorch, and SciPy, but it does not provide specific version numbers for these dependencies. (A snippet for recording both hardware and library versions appears after this table.)
Experiment Setup | Yes | "We run 100 iterations of hyperparameter optimization using Adam with a fixed learning rate of 0.01. For MNIST, we use IMGP with ν = 2; for CT slices, with ν = 3. Given the limited number of iterations per run, we opted to fix the number of eigenpairs at 20%N and 2%N, for SR-MNIST and MR-MNIST, respectively." (A GPyTorch-style sketch of this optimization loop appears after this table.)
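
Since Section 4.4 describes the algorithm only in numbered prose steps, the following is a minimal sketch of the graph-Matérn pipeline this family of methods builds on: a kNN graph over all data points, the normalized graph Laplacian, its smallest eigenpairs, and a Matérn-type spectral kernel. The function name, the use of scikit-learn's kneighbors_graph in place of FAISS, and all default parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh
from sklearn.neighbors import kneighbors_graph  # stand-in for FAISS, for brevity

def graph_matern_kernel(X, k=10, m=50, nu=2.0, kappa=1.0):
    """Sketch: Matern-type kernel from the eigenpairs of a kNN-graph Laplacian.

    Hypothetical helper; the paper's IMGP kernel differs in details
    (differentiable graph weights, noise handling, normalization).
    """
    # 1. Sparse kNN connectivity graph over all (labeled + unlabeled) points.
    W = kneighbors_graph(X, k, mode="connectivity", include_self=False)
    W = 0.5 * (W + W.T)  # symmetrize
    # 2. Symmetric normalized Laplacian L = I - D^{-1/2} W D^{-1/2}.
    d = np.asarray(W.sum(axis=1)).ravel()
    D_inv_sqrt = sp.diags(1.0 / np.sqrt(d))
    L = sp.identity(W.shape[0]) - D_inv_sqrt @ W @ D_inv_sqrt
    # 3. The m smallest eigenpairs approximate the manifold's
    #    Laplace-Beltrami spectrum.
    lam, phi = eigsh(L, k=m, which="SM")
    # 4. Matern spectral transform of the eigenvalues.
    spec = (2.0 * nu / kappa**2 + lam) ** (-nu)
    # 5. Truncated kernel matrix on the input points.
    return (phi * spec) @ phi.T
```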
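
The labeled-fraction naming convention from the Open Datasets row ("MR-MNIST-10% refers to n = 10%N") can be made concrete with a small, purely illustrative split; the seed and sizes below are assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)  # assumed seed, for illustration only
N = 10_000                      # training-set size, as reported for SR-MNIST
labeled_frac = 0.10             # "MR-MNIST-10%" convention: n = 10% of N

perm = rng.permutation(N)
n = int(labeled_frac * N)
labeled_idx, unlabeled_idx = perm[:n], perm[n:]
# The n labeled points carry regression targets; the remaining N - n
# unlabeled points still contribute to the graph approximating the manifold.
```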
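
Because neither exact hardware nor library versions are reported, a reproduction would have to record them itself. A minimal logging snippet, assuming the four libraries named in the paper are installed, could look like:

```python
import platform

import faiss
import gpytorch
import scipy
import torch

print("python  :", platform.python_version())
print("torch   :", torch.__version__)
print("gpytorch:", gpytorch.__version__)
print("scipy   :", scipy.__version__)
print("faiss   :", faiss.__version__)  # exposed in recent FAISS builds
if torch.cuda.is_available():
    print("gpu     :", torch.cuda.get_device_name(0))
```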
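
The reported optimization setup (100 Adam iterations at a fixed learning rate of 0.01) maps onto a standard GPyTorch marginal-likelihood loop. The sketch below uses an ordinary Matérn kernel on synthetic 1-D data as a placeholder, since the paper's implicit-manifold kernel is not reproduced here; note that the paper's ν = 2 and ν = 3 refer to the smoothness of that kernel, not to GPyTorch's MaternKernel, which only supports ν ∈ {1/2, 3/2, 5/2}.

```python
import torch
import gpytorch

# Placeholder data; the experiments use rotated-MNIST and CT-slice features.
train_x = torch.linspace(0, 1, 100)
train_y = torch.sin(2 * torch.pi * train_x) + 0.1 * torch.randn(100)

class ExactGP(gpytorch.models.ExactGP):
    def __init__(self, x, y, likelihood):
        super().__init__(x, y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        # Stand-in kernel; the paper would use its manifold Matern kernel here.
        self.covar_module = gpytorch.kernels.ScaleKernel(
            gpytorch.kernels.MaternKernel(nu=2.5)
        )

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )

likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = ExactGP(train_x, train_y, likelihood)
model.train()
likelihood.train()

# 100 iterations of Adam with a fixed learning rate of 0.01, as in the paper.
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
for _ in range(100):
    optimizer.zero_grad()
    loss = -mll(model(train_x), train_y)
    loss.backward()
    optimizer.step()
```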