Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Kernel Interpolation for Scalable Structured Gaussian Processes (KISS-GP)

Authors: Andrew Gordon Wilson, Hannes Nickisch

ICML 2015

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | We evaluate SKI for kernel matrix approximation (section 4.1), kernel learning (section 4.2), and natural sound modelling (section 4.3). |
| Researcher Affiliation | Collaboration | Andrew Gordon Wilson (EMAIL), Carnegie Mellon University; Hannes Nickisch (EMAIL), Philips Research Hamburg |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | We have implemented code as an extension to the GPML toolbox (Rasmussen & Nickisch, 2010). For updates and demos, see http://www.cs.cmu.edu/~andrewgw/pattern |
| Open Datasets | Yes | We use SKI to model the natural sound time series in Fig 3(a), considered in a different context by Turner (2010). |
| Dataset Splits | No | The paper mentions |
| Hardware Specification | Yes | All experiments were performed on a 2011 MacBook Pro, with an Intel i5 2.3 GHz processor and 4 GB of RAM. |
| Software Dependencies | No | The paper mentions the |
| Experiment Setup | Yes | For SKI, we use cubic interpolation and a 100 × 100 inducing point grid, equispaced in each input dimension. That is, we have as many inducing points m = 10,000 as we have training datapoints. We use the same θ initialisation for each approach. |
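The SKI setup quoted in the last row (interpolating training inputs onto an equispaced inducing-point grid, so that K ≈ W K_UU Wᵀ with sparse weights W) can be sketched briefly. This is an illustrative sketch only, not the paper's code: it uses linear rather than cubic interpolation, a small 1D grid rather than the paper's 100 × 100 grid, and an assumed RBF kernel with made-up sizes and lengthscale.

```python
import numpy as np

def rbf(a, b, ls=0.5):
    """Assumed RBF kernel on 1D inputs (not specified by the quoted setup)."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=50)    # training inputs (toy size)
u = np.linspace(0.0, 1.0, 25)         # equispaced inducing-point grid
h = u[1] - u[0]                       # grid spacing

# Interpolation weight matrix W (n x m): each input gets weight on its two
# neighbouring grid points (linear interpolation; the paper uses cubic,
# which would put weight on four neighbours instead).
W = np.zeros((x.size, u.size))
idx = np.clip(((x - u[0]) / h).astype(int), 0, u.size - 2)
frac = (x - u[idx]) / h
W[np.arange(x.size), idx] = 1.0 - frac
W[np.arange(x.size), idx + 1] = frac

# SKI approximation of the training kernel matrix: K_xx ≈ W K_uu W^T.
K_uu = rbf(u, u)
K_ski = W @ K_uu @ W.T
err = np.max(np.abs(K_ski - rbf(x, x)))
```

In the real method W is stored as a sparse matrix (only a few nonzeros per row) and K_uu inherits Toeplitz/Kronecker structure from the regular grid, which is what makes the m = 10,000 grid of the quoted setup tractable.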