Gaussian Processes on Cellular Complexes
Authors: Mathieu Alain, So Takao, Brooks Paige, Marc Peter Deisenroth
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we demonstrate the results of our GP model defined over cellular complexes (hereafter referred to as CC-GP) on two examples. First, we demonstrate that CC-GPs can make directed predictions on the edges of a graph by considering the problem of ocean current interpolation. In the second example, we investigate the effect of inter-signal mixing in the reaction-diffusion kernel. We provide details of the experimental setups in Appendix E. |
| Researcher Affiliation | Academia | 1Centre for Artificial Intelligence, University College London, London, UK. 2Department of Computing and Mathematical Sciences, California Institute of Technology, Pasadena, CA, USA. |
| Pseudocode | No | The paper presents mathematical definitions and derivations, but it does not include any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | No | The paper states that the models are implemented using the GPJax library, but it neither announces a release of the authors' own source code nor links to a code repository for the proposed methodology. |
| Open Datasets | Yes | Consider the geostrophic current data from the NOAA Coast Watch (2023) database. The unprocessed data is retrieved from the (NOAA Coast Watch, 2023) database, which comes in the form of two scalar fields: one representing the x-component and the other the y-component of the geostrophic current vector field. |
| Dataset Splits | No | The paper describes splits for training and testing (e.g., "30% of the data, selected randomly" for training, "a third of the data supported on each type of cell" for training), but it does not explicitly mention or detail a separate validation set split or cross-validation strategy. |
| Hardware Specification | Yes | The training took less than 30 seconds on a MacBook Pro with M1 chip. The training takes less than a minute on a MacBook Pro equipped with an M1 Pro chip. |
| Software Dependencies | No | The paper mentions using "GPJax library (Pinder & Dodd, 2022)" and "Adam from Optax (Bradbury et al., 2018)" but does not provide specific version numbers for these libraries or other key software components, only the publication year of the library papers. |
| Experiment Setup | Yes | The objective function is the conjugate marginal log-likelihood and the optimiser is an implementation of Adam from Optax (Bradbury et al., 2018) with a learning rate set at 0.1. For the training of the graph Matérn GP, the smoothness hyperparameter ν is fixed at 2. The amplitude and lengthscale hyperparameters σ², ℓ are both initialised at 1.0 and optimised for 1000 iterations using Adam. When training the CC-Matérn GP on edges, the smoothness hyperparameter ν is set to 2, and the amplitude and lengthscale hyperparameters σ², ℓ are initialised at 1.0, before optimising them for 1000 iterations using Adam. For training the CC-Matérn GP, the smoothness hyperparameter ν is fixed at 2, and the amplitude and lengthscale hyperparameters σ², ℓ are both initialised at 1.5, before optimising them for 1000 iterations using Adam. The training of RD-GP is similar: the smoothness hyperparameter ν is fixed at 2, and the amplitude hyperparameter σ², the reaction coefficient r, the diffusion coefficient d, and the cross-diffusion coefficient c are all initialised at 1.5. They are then optimised for 1000 iterations using Adam. In a second setup, the training of RD-GP is similar: the smoothness hyperparameter ν is fixed at 2, the amplitude hyperparameter σ², the reaction coefficient r, and the diffusion coefficient d are initialised at 1.5, while the cross-diffusion coefficient c is initialised at 2.5, before optimising them for 1000 iterations using Adam. |
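The graph Matérn GP baseline quoted above can be sketched concretely. The paper does not give code, so the following is a minimal NumPy sketch assuming the standard spectral form of the graph Matérn kernel, K = σ² (2ν/ℓ² I + L)^(−ν), built from the graph Laplacian L (Borovitskiy et al., 2021); the toy 4-node path graph and the function name `graph_matern_kernel` are illustrative, not from the paper. The initial values σ² = 1.0, ℓ = 1.0, ν = 2 match the quoted setup; the Adam/Optax optimisation loop over σ² and ℓ is omitted for brevity.

```python
import numpy as np

def graph_matern_kernel(L, sigma2=1.0, lengthscale=1.0, nu=2.0):
    """Matérn-type covariance on a graph from its Laplacian L.

    Assumes the spectral form K = sigma2 * U diag((2*nu/l^2 + lam)^(-nu)) U^T,
    where L = U diag(lam) U^T is the eigendecomposition of the Laplacian.
    """
    evals, evecs = np.linalg.eigh(L)  # symmetric eigendecomposition
    spec = sigma2 * (2.0 * nu / lengthscale**2 + evals) ** (-nu)
    return (evecs * spec) @ evecs.T   # equals U diag(spec) U^T

# Toy 4-node path graph (illustrative): Laplacian = degree matrix - adjacency
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

# Hyperparameters at their quoted initial values (sigma2=1.0, l=1.0, nu=2)
K = graph_matern_kernel(L, sigma2=1.0, lengthscale=1.0, nu=2.0)
print(K.shape)  # (4, 4)
```

For the CC-Matérn GP on edges, the same construction would be applied with the Hodge Laplacian of the cellular complex in place of the graph Laplacian; the resulting K is a symmetric positive semi-definite covariance over the chosen cells.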