AReS and MaRS: Adversarial and MMD-Minimizing Regression for SDEs
Authors: Gabriele Abbati, Philippe Wenk, Michael A. Osborne, Andreas Krause, Bernhard Schölkopf, Stefan Bauer
ICML 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To evaluate the empirical performance of our method, we conduct several experiments on simulated data, using four standard benchmark systems and comparing against the EKF-based approach by Särkkä et al. (2015) and two GP-based approaches respectively by Vrettas et al. (2015) and Yildiz et al. (2018). |
| Researcher Affiliation | Academia | 1 Department of Engineering Science, University of Oxford; 2 Learning and Adaptive Systems Group, ETH Zürich; 3 Max Planck ETH Center for Learning Systems; 4 Empirical Inference Group, Max Planck Institute for Intelligent Systems. |
| Pseudocode | Yes | Algorithm 1 Ancestral sampling for z |
| Open Source Code | Yes | We share and publish our code to facilitate future research at https://github.com/gabb7/AReS-MaRS. |
| Open Datasets | Yes | To evaluate the empirical performance of our method, we conduct several experiments on simulated data, using four standard benchmark systems and comparing against the EKF-based approach by Särkkä et al. (2015) and two GP-based approaches respectively by Vrettas et al. (2015) and Yildiz et al. (2018). |
| Dataset Splits | No | The paper describes using "simulated data" from benchmark systems and conducting "100 independent realizations" for statistical evaluation, but it does not specify explicit training, validation, or test dataset splits in the conventional sense. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory, or specific computing environments) used for running its experiments. |
| Software Dependencies | No | The paper describes components like neural network layers and kernel types, but it does not specify any software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow, or specific library versions) that would be needed to replicate the experiments. |
| Experiment Setup | Yes | the critic in the adversarial parameter estimation is a 2-layer fully connected neural network, with respectively 256 and 128 nodes. Every batch, for both MMD and adversarial training, contains 256 elements. (A hedged code sketch of this critic follows the table.) |
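
The experiment-setup row above quotes the critic architecture used in the adversarial parameter estimation: a 2-layer fully connected network with 256 and 128 nodes, trained on batches of 256 elements. Below is a minimal sketch of such a critic, assuming a PyTorch-style implementation; the input dimensionality, ReLU activations, and scalar output head are illustrative assumptions, not details taken from the paper or the released code.

```python
import torch
import torch.nn as nn

class Critic(nn.Module):
    """Hypothetical 2-layer fully connected critic (256 and 128 hidden units),
    matching the sizes quoted in the experiment-setup row. The input dimension,
    activation choice, and scalar output layer are assumptions for illustration."""

    def __init__(self, input_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 128),
            nn.ReLU(),
            nn.Linear(128, 1),  # scalar critic score (assumed output head)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Batches of 256 elements, as stated in the paper's setup description.
batch_size = 256
critic = Critic(input_dim=2)        # input_dim is a placeholder, not from the paper
batch = torch.randn(batch_size, 2)  # stand-in data for illustration only
scores = critic(batch)
```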