Thin and deep Gaussian processes
Authors: Daniel Augusto de Souza, Alexander Nikitin, ST John, Magnus Ross, Mauricio A. Álvarez, Marc Peter Deisenroth, João P. P. Gomes, Diego Mesquita, César Lincoln C. Mattos
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show with theoretical and experimental results that i) TDGP is, unlike previous models, tailored to specifically discover lower-dimensional manifolds in the input data, ii) TDGP behaves well when increasing the number of layers, and iii) TDGP performs well in standard benchmark datasets. |
| Researcher Affiliation | Academia | Daniel Augusto de Souza (University College London); Alexander Nikitin (Aalto University); ST John (Aalto University); Magnus Ross (University of Manchester); Mauricio A. Álvarez (University of Manchester); Marc Peter Deisenroth (University College London); João P. P. Gomes (Federal University of Ceará); Diego Mesquita (Getulio Vargas Foundation); César Lincoln C. Mattos (Federal University of Ceará) |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available as supplementary material at https://github.com/spectraldani/thindeepgps. |
| Open Datasets | Yes | In all experiments, inputs and targets are normalized so that the training set has zero mean and unit variance. Appendix D contains more details of architecture, training, and initialization. ... We also compare the methods on four well-known regression datasets from the UCI repository. |
| Dataset Splits | Yes | We subsample 1,000 points from this region and compare the methods via five-fold cross-validation. ... To assess each model fairly, we adopt a ten-fold separation of the datasets into training and testing. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions software used: "We implemented the experiments in Python using GPflow [10], GPflux [8], and Keras [2]." However, it does not specify version numbers for Python or the listed libraries. |
| Experiment Setup | No | The paper states, "Appendix D contains more details of architecture, training, and initialization." However, it does not include concrete hyperparameter values or detailed training configurations in the main text provided. |
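The Dataset Splits and Open Datasets rows describe a standard evaluation protocol: a ten-fold separation into training and testing, with inputs and targets standardized so the training set has zero mean and unit variance. A minimal sketch of that protocol is below; the function name `normalized_folds` and its parameters are illustrative, not taken from the paper's code.

```python
import numpy as np
from sklearn.model_selection import KFold

def normalized_folds(X, y, n_splits=10, seed=0):
    """Yield (X_train, y_train, X_test, y_test) folds where inputs and
    targets are standardized using statistics computed on the training
    fold only, as described in the paper's experimental setup."""
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for train_idx, test_idx in kf.split(X):
        X_tr, X_te = X[train_idx], X[test_idx]
        y_tr, y_te = y[train_idx], y[test_idx]
        # Statistics come from the training fold, so the training set
        # has zero mean and unit variance after scaling; the test fold
        # is transformed with the same statistics to avoid leakage.
        x_mu, x_sd = X_tr.mean(axis=0), X_tr.std(axis=0)
        y_mu, y_sd = y_tr.mean(), y_tr.std()
        yield ((X_tr - x_mu) / x_sd, (y_tr - y_mu) / y_sd,
               (X_te - x_mu) / x_sd, (y_te - y_mu) / y_sd)
```

Computing the normalization statistics inside each fold, rather than once on the full dataset, is what keeps the ten test folds untouched by training information.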