Kernel Observers: Systems-Theoretic Modeling and Inference of Spatiotemporally Evolving Processes

Authors: Hassan A. Kingravi, Harshal R. Maske, Girish Chowdhary

NeurIPS 2016 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our approach outperforms existing methods in numerical experiments." (Section 3: Experimental Results)
Researcher Affiliation | Collaboration | Hassan A. Kingravi, Pindrop, Atlanta, GA 30308, hkingravi@pindrop.com; Harshal Maske and Girish Chowdhary, University of Illinois at Urbana-Champaign, Urbana, IL 61801, hmaske2@illinois.edu, girishc@illinois.edu
Pseudocode | Yes | Algorithm 1: Measurement Map K̃
Open Source Code | No | The paper does not provide any explicit statement about releasing source code for the methodology described, nor does it include a link to a code repository.
Open Datasets | Yes | "Our first dataset, the Intel Berkeley research lab temperature data, consists of 50 wireless temperature sensors in an indoor laboratory region spanning 40.5 meters in length and 31 meters in width (http://db.csail.mit.edu/labdata/labdata.html). The second dataset is the Irish wind dataset, consisting of daily average wind speed data collected from 1961 to 1978 at 12 meteorological stations in the Republic of Ireland (http://lib.stat.cmu.edu/datasets/wind.desc)."
Dataset Splits | No | The paper describes training and testing data splits, but it does not explicitly mention or describe a validation dataset or split. For example, for the Intel Berkeley dataset, it states: "Training data consists of temperature data on March 6th 2004... Testing is performed over another 72 timesteps..." without mentioning a validation set.
Hardware Specification | No | The paper mentions "high computational requirements" for nonstationary kernel methods, but it does not specify any hardware details (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies | No | The paper mentions the use of a "sparse Gaussian Process model", "Gaussian process inference", "matrix least-squares", and a "Kalman filter" (Algorithms 3 and 4 in the supplementary), but it does not provide specific version numbers for any software, libraries, or solvers used.
Experiment Setup | Yes | Model inference for the kernel observer involved three steps: 1) picking the Gaussian RBF kernel k(x, y) = exp(−‖x − y‖²/2σ²), a search for the ideal σ is performed for a sparse Gaussian Process model (with a fixed basis vector set C selected using the method in [3]). For the data set discussed in this section, the number of basis vectors was equal to the number of sensing locations in the training set, with the domain for the input set defined over R²; 2) having obtained σ, Gaussian process inference is used to generate weight vectors for each time-step in the training set, resulting in the sequence w_τ, τ ∈ {1, . . . , T}; 3) matrix least-squares is applied to this sequence to infer Â (Algorithm 3 in the supplementary). For prediction in the autonomous setup, Â is used to propagate the state w_τ forward to make predictions with no feedback, and in the observer setup, a Kalman filter (Algorithm 4 in the supplementary), with N determined using Proposition 2 and locations picked randomly, is used to propagate w_τ forward to make predictions. ... First, we pick sets of points C^(ι) = {c_1, . . . , c_{M_ι}}, c_j ∈ Ω, M = 50, and construct a dynamics matrix A = Λ ∈ R^{M×M} with cyclic index 5. We pick the RBF kernel k(x, y) = exp(−‖x − y‖²/2σ²), σ = 0.02. ... The cyclic index of Â was determined to be 2, so N was set to 2 for the kernel observer with feedback. ... The cyclic index of Â was determined to be 2. ... To build the autonomous kernel observer and general kernel observer models, we followed the same procedure outlined in Section 3.2, but with C = {c_1, . . . , c_M}, c_j ∈ R², |C| = 300. The cyclic index of Â was determined to be 250, and hence the Kalman filter for the kernel observer model using N ∈ {250, 500, 1000} at random locations was utilized to track the system state given a random initial condition w_0.
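The three-step procedure quoted above (RBF kernel selection, per-timestep weight inference, matrix least-squares for Â, and Kalman-filtered propagation) can be illustrated with a short sketch. The code below is not the authors' implementation (none was released); it assumes a Gaussian RBF kernel, substitutes ridge-regularized kernel regression for the sparse GP inference of [3], and uses a textbook linear Kalman filter. The function names (rbf_kernel, fit_weights, fit_dynamics, kalman_observer) and the regularization and noise parameters lam, q, r are illustrative choices, not values from the paper.

```python
# Minimal sketch of the kernel-observer pipeline described above; illustrative only.
import numpy as np

def rbf_kernel(X, C, sigma):
    """Gaussian RBF kernel matrix: K[i, j] = exp(-||x_i - c_j||^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_weights(X_train, Y_train, C, sigma, lam=1e-6):
    """Step 2 (stand-in for GP inference): one weight vector w_tau per time step,
    via ridge-regularized least squares so that K(X_train, C) @ w_tau ~ y_tau.
    Y_train has shape (T, n_sensors); returns W with shape (T, M)."""
    K = rbf_kernel(X_train, C, sigma)                    # (n_sensors, M)
    G = K.T @ K + lam * np.eye(C.shape[0])
    return np.linalg.solve(G, K.T @ Y_train.T).T

def fit_dynamics(W):
    """Step 3: matrix least squares for A_hat such that w_{tau+1} ~ A_hat @ w_tau."""
    A_T, *_ = np.linalg.lstsq(W[:-1], W[1:], rcond=None) # solves W[:-1] @ A^T ~ W[1:]
    return A_T.T

def kalman_observer(A, w0, X_obs, C, sigma, Y_obs, q=1e-4, r=1e-2):
    """Observer setup: propagate w with A and correct it with measurements taken
    at the N sensing locations X_obs. Returns the filtered weight trajectory."""
    M = A.shape[0]
    H = rbf_kernel(X_obs, C, sigma)                      # measurement map, (N, M)
    w, P = w0.copy(), np.eye(M)
    trajectory = []
    for y in Y_obs:                                      # y: readings at X_obs
        w, P = A @ w, A @ P @ A.T + q * np.eye(M)        # predict
        S = H @ P @ H.T + r * np.eye(H.shape[0])         # innovation covariance
        G = P @ H.T @ np.linalg.inv(S)                   # Kalman gain
        w = w + G @ (y - H @ w)                          # update
        P = (np.eye(M) - G @ H) @ P
        trajectory.append(w.copy())
    return np.array(trajectory)
```

With training snapshots Y_train (T rows of sensor readings at locations X_train), one would compute W = fit_weights(X_train, Y_train, C, sigma) and A_hat = fit_dynamics(W), then either propagate a weight vector autonomously (w = A_hat @ w, no feedback) or run kalman_observer with measurements at N randomly chosen locations; predictions at new points x are K(x, C) @ w, mirroring the autonomous and observer setups described in the quoted text.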