The Kernel Kalman Rule - Efficient Nonparametric Inference with Recursive Least Squares
Authors: Gregor Gebhardt, Andras Kupcsik, Gerhard Neumann
AAAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show on a nonlinear state estimation task with high dimensional observations that our approach provides a significantly improved estimation accuracy while the computational demands are significantly decreased. |
| Researcher Affiliation | Academia | Gregor H. W. Gebhardt Technische Universität Darmstadt Hochschulstr. 10 64289 Darmstadt, Germany gebhardt@ias.tu-darmstadt.de Andras Kupcsik School of Computing National University of Singapore 13 Computing Drive, Singapore 117417 kupcsik@comp.nus.edu.sg Gerhard Neumann School of Computer Science University of Lincoln Lincoln, LN6 7TS, UK gneumann@lincoln.ac.uk |
| Pseudocode | No | No explicit pseudocode or algorithm block was found in the paper. |
| Open Source Code | No | The paper does not provide any explicit statements or links indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | We used walking and running motions captured from one subject from the HuMoD dataset (Wojtusch and von Stryk 2015). |
| Dataset Splits | No | The paper mentions using a 'training set' and 'test data-set' for specific experiments, and a 'validation set' for hyper-parameter optimization, but does not provide specific numerical splits (e.g., percentages or sample counts) for training, validation, or testing data needed for direct reproduction. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., CPU, GPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., libraries, frameworks, or programming languages with their specific versions) required to replicate the experiments. |
| Experiment Setup | Yes | We train the models with data sets consisting of hidden states, sampled uniformly from the interval [-2.5, 2.5], and the corresponding measurements, where we add Gaussian noise with standard deviation σ = 0.3. ... In a first experiment, we learned the subKKF with a kernel size of 800 samples, where we used data windows of size 3 with the 3D positions of all 36 markers as state representation and the current 3D positions of all markers as observations. |
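
To make the quoted data-generation step concrete, the sketch below draws hidden states uniformly from [-2.5, 2.5] and adds Gaussian noise with σ = 0.3 to the measurements, as described in the Experiment Setup row. This is a minimal illustration only: the observation function `g` and the sample count are placeholders, since the excerpt does not specify them.

```python
import numpy as np

rng = np.random.default_rng(0)

n_train = 1000      # placeholder sample count; not stated in the excerpt
noise_std = 0.3     # measurement noise standard deviation quoted from the paper

# Hidden states sampled uniformly from the interval [-2.5, 2.5]
states = rng.uniform(-2.5, 2.5, size=n_train)

def g(x):
    # Hypothetical nonlinear observation model (not specified in the excerpt)
    return np.sin(x)

# Measurements: observation of the hidden state plus Gaussian noise
observations = g(states) + rng.normal(0.0, noise_std, size=n_train)
```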