Fast Kernel Learning for Multidimensional Pattern Extrapolation

Authors: Andrew G. Wilson, Elad Gilboa, Arye Nehorai, John P. Cunningham

NeurIPS 2014 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We recover sophisticated out-of-class kernels, perform texture extrapolation, inpainting, and video extrapolation, and long-range forecasting of land surface temperatures, all on large multidimensional datasets, including a problem with 383,400 training points. The proposed method significantly outperforms alternative scalable and flexible Gaussian process methods in speed and accuracy.
Researcher Affiliation | Academia | Andrew Gordon Wilson (CMU), Elad Gilboa (WUSTL), Arye Nehorai (WUSTL), John P. Cunningham (Columbia)
Pseudocode | No | The paper describes algorithmic steps and procedures, but does not contain a formal pseudocode block or clearly labeled algorithm.
Open Source Code | No | The paper uses an implementation provided by other authors for comparison (SSGP), but does not state that its own source code (GPatt) is publicly available.
Open Datasets | No | We extrapolate the missing region, shown in Figure 1a, on a real metal tread plate texture. In Table 1 we compare the test performance of GPatt with SSGP, and GPs using SE, MA, and RQ kernels, for extrapolating five different patterns, with the same train/test split as for the tread plate pattern in Figure 1. All patterns are shown in the supplement.
Dataset Splits | Yes | There are 12,675 training instances (Figure 1a) and 4,225 test instances (Figure 1b). (One arrangement consistent with these counts is sketched after the table.)
Hardware Specification | Yes | Experiments were run on a 64-bit PC with 8GB RAM and a 2.8 GHz Intel i7 processor.
Software Dependencies | No | Implemented in GPML (http://www.gaussianprocess.org/gpml).
Experiment Setup | No | We use a simple initialisation scheme: any frequencies {µ_a} are drawn from a uniform distribution from 0 to the Nyquist frequency (1/2 the sampling rate), length-scales {1/σ_a} from a truncated Gaussian distribution with mean proportional to the range of the data, and weights {w_a} are initialised as the empirical standard deviation of the data divided by the number of components used in the model. (A code sketch of this scheme appears after the table.)
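The counts in the Dataset Splits row factor neatly: 12,675 + 4,225 = 16,900 = 130 × 130, and 4,225 = 65 × 65. The sketch below builds a train/test mask consistent with those counts; the grid size, block size, and centred placement are our inferences for illustration, not dimensions stated in the row above.

```python
import numpy as np

# Hypothetical reconstruction of the train/test split. The 130x130 grid, the
# 65x65 held-out block, and its centred placement are all assumptions chosen
# only to match the reported instance counts.
H = W = 130
BLOCK = 65

mask = np.ones((H, W), dtype=bool)          # True = training pixel
r0 = (H - BLOCK) // 2                       # centred placement is a guess
c0 = (W - BLOCK) // 2
mask[r0:r0 + BLOCK, c0:c0 + BLOCK] = False  # held-out (test) region

assert mask.sum() == 12_675                 # training instances
assert (~mask).sum() == 4_225               # test instances
```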
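The Experiment Setup row describes a concrete initialisation scheme for the spectral-mixture hyperparameters. Below is a minimal Python sketch of that scheme for one input dimension, assuming a regularly spaced grid. The function name, the rejection-sampling truncation, and the proportionality constant for the length-scale mean and scale (both taken as 1 times the data range) are our assumptions, not details given in the row above.

```python
import numpy as np

def init_sm_hypers(x, y, n_components, seed=None):
    """Sketch of the stated initialisation for one input dimension.

    Returns weights w_a, frequencies mu_a, and inverse length-scales sigma_a
    for an n_components spectral-mixture kernel. x and y are 1-D numpy arrays.
    """
    rng = np.random.default_rng(seed)

    # Nyquist frequency = half the sampling rate of the (assumed regular) grid.
    spacing = np.min(np.diff(np.sort(x)))
    nyquist = 0.5 / spacing

    # Frequencies mu_a: uniform on [0, Nyquist].
    mu = rng.uniform(0.0, nyquist, size=n_components)

    # Length-scales 1/sigma_a: truncated Gaussian with mean proportional to
    # the data range (the proportionality constant and the scale, both set to
    # the range itself, are assumptions); truncation to positive values is
    # done by rejection sampling.
    data_range = float(x.max() - x.min())
    lengthscales = np.empty(n_components)
    for a in range(n_components):
        draw = rng.normal(loc=data_range, scale=data_range)
        while draw <= 0.0:
            draw = rng.normal(loc=data_range, scale=data_range)
        lengthscales[a] = draw

    # Weights w_a: empirical standard deviation of the data divided by the
    # number of components.
    w = np.full(n_components, y.std() / n_components)

    return w, mu, 1.0 / lengthscales  # sigma_a = 1 / length-scale
```

For unit-spaced inputs the Nyquist frequency is 0.5, so a call such as init_sm_hypers(np.arange(100.0), y, n_components=10) places all initial frequencies in [0, 0.5).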