Koopman Kernel Regression
Authors: Petar Bevanda, Max Beier, Armin Lederer, Stefan Sosnowski, Eyke Hüllermeier, Sandra Hirche
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments demonstrate superior forecasting performance compared to Koopman operator and sequential data predictors in RKHS. |
| Researcher Affiliation | Academia | Petar Bevanda (TU Munich, petar.bevanda@tum.de); Max Beier (TU Munich, max.beier@tum.de); Armin Lederer (TU Munich, armin.lederer@tum.de); Stefan Sosnowski (TU Munich, sosnowski@tum.de); Eyke Hüllermeier (LMU Munich, eyke@ifi.lmu.de); Sandra Hirche (TU Munich, hirche@tum.de) |
| Pseudocode | Yes | "For a better overview, the pseudocode for regression and forecasting of our method is shown in Algorithm 1" (Algorithm 1: Regression and LTI Forecasts using KKR). |
| Open Source Code | Yes | Along with code for reproduction of our experiments, we provide a JAX [76] reliant Python module implementing a sklearn [77] compliant KKR estimator at https://github.com/TUM-ITR/koopcore. |
| Open Datasets | No | The paper uses "flow past a cylinder" data and sample trajectories (N=200, H=14), but does not provide concrete access information (link, DOI, specific repository, or formal citation with authors/year) for a publicly available or open dataset. |
| Dataset Splits | No | The paper reports sample trajectories on both training and testing data and discusses generalization, but does not provide specific details about validation splits (percentages, counts, or the methodology used to create them). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, processor types) used for running its experiments. |
| Software Dependencies | No | we provide a JAX [76] reliant Python module implementing a sklearn [77] compliant KKR estimator at https://github.com/TUM-ITR/koopcore. The paper mentions software components (JAX, sklearn) but does not provide specific version numbers for them. |
| Experiment Setup | Yes | For fairness, the same kernel and hyperparameters are chosen for our KKR, PCR (EDMD), RRR [23] and regression with signature kernels (Sig-PDE) [48]... for respectively optimal DKKR=100, DPCR=10 and 15 delays for Sig-PDEs... RBF kernel lengthscale ℓ |
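The table above notes that the authors release a sklearn-compliant KKR estimator and that forecasting proceeds by regression in an RKHS with an RBF kernel. As a rough illustration of what such an estimator's interface could look like, the following sketch fits a kernel ridge regression of the one-step flow map and rolls it out over a horizon. This is a hedged, simplified stand-in written with NumPy, not the authors' `koopcore` implementation; the class name `KernelForecaster` and all hyperparameter defaults are hypothetical.

```python
import numpy as np

def rbf_gram(X, Y, lengthscale=1.0):
    # Gaussian (RBF) Gram matrix from pairwise squared distances.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * lengthscale ** 2))

class KernelForecaster:
    """Kernel ridge regression of the one-step map x_t -> x_{t+1};
    multi-step forecasts iterate the learned map (a simplified
    stand-in for an sklearn-style kernel forecaster, not koopcore)."""

    def __init__(self, lengthscale=1.0, reg=1e-6):
        self.lengthscale = lengthscale
        self.reg = reg

    def fit(self, X, Y):
        # X: (n, d) states at time t; Y: (n, d) states at time t+1.
        self.X_train = X
        K = rbf_gram(X, X, self.lengthscale)
        # Solve the regularized normal equations (K + reg*I) alpha = Y.
        self.alpha = np.linalg.solve(K + self.reg * np.eye(len(X)), Y)
        return self

    def predict(self, x0, horizon):
        # Roll the learned one-step map forward from x0.
        traj = [np.atleast_2d(x0)]
        for _ in range(horizon):
            k = rbf_gram(traj[-1], self.X_train, self.lengthscale)
            traj.append(k @ self.alpha)
        return np.vstack(traj)
```

For example, trained on pairs from the scalar linear system x_{t+1} = 0.9 x_t, `KernelForecaster(...).fit(X, 0.9 * X).predict(x0, H)` returns an (H+1, 1) forecast that closely tracks the true geometric decay inside the training range.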