Bayesian Alignments of Warped Multi-Output Gaussian Processes

Authors: Markus Kaiser, Clemens Otte, Thomas Runkler, Carl Henrik Ek

NeurIPS 2018

Each entry below lists the reproducibility variable, the result, and the LLM response.
Research Type: Experimental. We show results for an artificial data set and real-world data of two wind turbines.
Researcher Affiliation: Collaboration. Markus Kaiser (Siemens AG; Technical University of Munich; markus.kaiser@siemens.com), Clemens Otte (Siemens AG; clemens.otte@siemens.com), Thomas Runkler (Siemens AG; Technical University of Munich; thomas.runkler@siemens.com), Carl Henrik Ek (University of Bristol; carlhenrik.ek@bristol.ac.uk).
Pseudocode: No. The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code: No. The paper does not state that the source code for the described methodology is open source, nor does it link to a code repository.
Open Datasets: No. The paper mentions an 'artificial data set' and 'real data recorded from a pair of neighbouring wind turbines in a wind farm', but provides no concrete access information (link, DOI, specific repository name, or formal citation with authors and year) for either dataset, so public availability cannot be confirmed.
Dataset Splits: No. The paper mentions 'training data' and 'test-log-likelihoods' but gives no details on the dataset splits (percentages or exact counts for training, validation, and test sets) or on how the splits were created; no validation set is explicitly mentioned.
Hardware Specification: No. The paper gives no details about the hardware (e.g. CPU or GPU models, memory) used to run the experiments.
Software Dependencies: No. The paper cites 'GPflow: A Gaussian process library using TensorFlow' in its references, implying that these tools were used, but it does not specify version numbers for any software dependencies (such as Python, GPflow, or TensorFlow) that would be needed for replication.
Experiment Setup: No. The paper describes some model setup details, such as squared exponential kernels, priors preferring longer length scales and smaller variances, and identity mean functions. However, it gives no numerical values for common hyperparameters (learning rate, batch size, number of epochs) and no details about the optimizer used, which are essential for reproducing the experimental setup. A hedged sketch of the stated modelling choices is given below.
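For context, the modelling choices the paper does state (squared exponential kernels, priors preferring longer length scales and smaller variances, identity mean functions) could be written down as the following minimal GPflow sketch. This is illustrative only: the GPflow 2.x API style, the Gamma prior parameters, the toy data, and the Scipy optimizer are assumptions; the paper's actual alignment and warping model and its hyperparameter values are not reported.

```python
# Minimal sketch, assuming GPflow 2.x; none of these numbers come from the paper.
import numpy as np
import gpflow
import tensorflow_probability as tfp

f64 = gpflow.utilities.to_default_float

# Toy 1-D data standing in for the unreleased turbine measurements.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(50, 1))
Y = np.sin(6.0 * X) + 0.1 * rng.standard_normal((50, 1))

# Squared exponential kernel with priors nudging the model towards longer
# length scales and smaller variances; the exact prior shapes are assumptions.
kernel = gpflow.kernels.SquaredExponential()
kernel.lengthscales.prior = tfp.distributions.Gamma(f64(3.0), f64(1.0))  # mass on larger length scales
kernel.variance.prior = tfp.distributions.Gamma(f64(1.0), f64(2.0))      # mass on smaller variances

# Identity mean function, as described in the paper.
model = gpflow.models.GPR(
    data=(X, Y),
    kernel=kernel,
    mean_function=gpflow.mean_functions.Identity(),
)

# Fit the hyperparameters with GPflow's Scipy wrapper (the optimizer choice is an assumption).
gpflow.optimizers.Scipy().minimize(model.training_loss, model.trainable_variables)
```

A replication of the full method would additionally need the alignment and warping layers and the variational training details, none of which are specified in the paper.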