Modular Gaussian Processes for Transfer Learning

Authors: Pablo Moreno-Muñoz, Antonio Artés-Rodríguez, Mauricio A. Álvarez

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive results illustrate the usability of our framework in large-scale and multitask experiments, also compared with the exact inference methods in the literature."
Researcher Affiliation | Academia | Pablo Moreno-Muñoz (Section for Cognitive Systems, Technical University of Denmark (DTU)); Antonio Artés-Rodríguez (Dept. of Signal Theory and Communications, Universidad Carlos III de Madrid, Spain; Evidence-Based Behavior (eB2), Spain); Mauricio A. Álvarez (Dept. of Computer Science, University of Sheffield, UK)
Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks.
Open Source Code | Yes | "We provide Pytorch code that allows to easily learn the meta models from GP modules. It also includes the baseline methods used. The code is publicly available in the repository: https://github.com/pmorenoz/ModularGP/."
Open Datasets | Yes | "Figure 2: Modular GPs for {0, 1} MNIST data samples... (iv) Banana dataset... (v) Airline delays (US flight data): We took data of US airlines from 2008 (1.5M)... (vi) London household: Based on Hensman et al. (2013), we obtained the register of properties sold in the Greater London County during 2017." (See the data-loading sketch below.)
Dataset Splits | No | The paper reports 'test NLPD' for its results but does not explicitly describe how the data was partitioned into training, validation, and test sets. (See the hold-out/NLPD sketch below.)
Hardware Specification | No | The paper mentions 'providing the computational resources' but does not specify any hardware details such as GPU/CPU models, memory, or the computing environment used for the experiments.
Software Dependencies | No | The paper states 'We provide Pytorch code' but does not list version numbers for PyTorch, Python, or any other software dependencies.
Experiment Setup | No | "For standard optimization, we used the Adam algorithm (Kingma and Ba, 2015). Details about strategies for initialization and optimization are provided in the appendix." (Those appendix details are not part of the main text provided.) (See the Adam sketch below.)
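
The Open Datasets row cites, among others, a {0, 1} MNIST subset. As a point of reference only, here is a minimal sketch of one standard way to extract those two digit classes with torchvision; the root path and scaling are assumptions, not the authors' data pipeline.

```python
# Hedged sketch: a {0, 1} MNIST subset via torchvision. Illustration only;
# the root path and [0, 1] scaling are assumptions, not the paper's setup.
from torchvision import datasets

mnist = datasets.MNIST(root="./data", train=True, download=True)
mask = (mnist.targets == 0) | (mnist.targets == 1)
images = mnist.data[mask].float() / 255.0   # (N, 28, 28), values in [0, 1]
labels = mnist.targets[mask]                # zeros and ones only
print(images.shape, labels.shape)
```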
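Since the Dataset Splits row notes that the paper reports test NLPD without stating its partitioning, the following sketch shows one common convention: a random hold-out split and the negative log predictive density under a Gaussian predictive distribution. The 80/20 ratio, the synthetic data, and the placeholder predictive moments `mu` and `var` are all assumptions, not the authors' protocol.

```python
# Hedged sketch: random hold-out split plus Gaussian test NLPD.
# The split ratio and data are assumptions; the paper does not state them.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=1000)

# Assumed 80/20 random split into train and test indices.
idx = rng.permutation(len(X))
n_train = int(0.8 * len(X))
train_idx, test_idx = idx[:n_train], idx[n_train:]

# Placeholder predictive mean/variance; in practice these would come from
# the trained model's posterior evaluated at X[test_idx].
mu = np.sin(X[test_idx, 0])
var = np.full(len(test_idx), 0.1 ** 2)

# Negative log predictive density of a Gaussian, averaged over test points.
nlpd = 0.5 * np.mean(np.log(2 * np.pi * var) + (y[test_idx] - mu) ** 2 / var)
print(f"test NLPD: {nlpd:.3f}")
```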
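Finally, the Experiment Setup row quotes the use of Adam (Kingma and Ba, 2015). The sketch below shows a generic PyTorch training loop with Adam over variational-style parameters; the toy model and squared-error objective are placeholders standing in for the paper's variational bound, not the authors' implementation.

```python
# Hedged sketch: a generic Adam training loop in PyTorch. Only the use of
# torch.optim.Adam reflects the paper's text; model and loss are placeholders.
import torch

torch.manual_seed(0)

# Toy 1-D regression data (placeholder, not a dataset from the paper).
x = torch.linspace(-3.0, 3.0, 200).unsqueeze(-1)          # (200, 1)
y = torch.sin(x).squeeze(-1) + 0.1 * torch.randn(200)     # (200,)

# Placeholder parameters playing the role of inducing inputs and a
# variational mean in a sparse-GP-style model.
inducing_inputs = torch.randn(20, 1, requires_grad=True)   # (20, 1)
variational_mean = torch.zeros(20, requires_grad=True)     # (20,)

optimizer = torch.optim.Adam([inducing_inputs, variational_mean], lr=0.01)

for step in range(500):
    optimizer.zero_grad()
    # RBF weights between inducing inputs and data points: (20, 200).
    weights = torch.exp(-torch.cdist(inducing_inputs, x) ** 2)
    pred = variational_mean @ weights                      # (200,)
    # Squared-error surrogate standing in for the (negative) variational bound.
    loss = torch.mean((pred - y) ** 2)
    loss.backward()
    optimizer.step()

print(f"final loss: {loss.item():.4f}")
```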