PAC-Bayes Generalisation Bounds for Dynamical Systems Including Stable RNNs

Authors: Deividas Eringis, John Leth, Zheng-Hua Tan, Rafael Wisniewski, Mihály Petreczky

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "In this paper, we 1) formalise the learning problem, 2) derive a PAC-Bayesian error bound for such systems, 3) discuss various consequences of this error bound, and 4) show an illustrative example, with discussions on computing the proposed bound." |
| Researcher Affiliation | Academia | "1 Department of Electronic Systems, Aalborg University, Denmark; 2 Univ. Lille, CNRS, Centrale Lille, UMR 9189 CRIStAL, F-59000 Lille, France" |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | "The code for this example is available in a Git repository (Eringis 2023)." |
| Open Datasets | No | The paper uses only a synthetic example: "We generate data using equation 11 by sampling $e_g(t)$ from a truncated Gaussian distribution." The data are generated, not drawn from a publicly available dataset with concrete access information such as a link or formal citation. |
| Dataset Splits | No | The paper describes generating synthetic data for an illustrative example and evaluating it on "10 different realisations of data"; it does not specify the training/validation/test splits typically used when evaluating models on fixed datasets. |
| Hardware Specification | No | The paper does not specify any hardware details (e.g., GPU/CPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers. |
| Experiment Setup | Yes | "We randomly chose a generator as in Assumption 1 with: $s_g(t+1) = \mathrm{ReLU}\!\left(A_g s_g(t) + B_g e_g(t) + b_{s,g}\right)$ (11a), $\begin{bmatrix} y(t) \\ x(t) \end{bmatrix} = \tanh\!\left(C_g s_g(t) + D_g e_g(t) + b_{y,g}\right)$ (11b), with $n_s = 2$, $n_y = 1$, $n_x = 1$; see the numerical values of the weights $(A_g, B_g, b_{s,g}, C_g, D_g, b_{y,g})$ in (Eringis et al. 2023a, equation 213, Appendix B). ... The predictors use ReLU and tanh activation functions and 2 hidden states, and all weights are parameterised, including the initial state. The loss function is the square loss. ... The prior is chosen as $\pi = \mathcal{N}(0, \sigma^2 I)$ with $\sigma^2 = 0.02$. The posterior is the Gibbs posterior, i.e. $\hat\rho_N(\theta) \propto \pi(\theta)\, e^{-\lambda_N \hat{\mathcal{L}}(\theta)}$ with $\lambda_N = N$." A sketch of this setup is given below the table. |
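
To make the quoted setup concrete, here is a minimal Python sketch of the synthetic data generator in equation (11) and of the unnormalised Gibbs-posterior log-density. The paper's actual generator weights live in (Eringis et al. 2023a, equation 213, Appendix B) and are not reproduced in this excerpt, so the weights, the noise dimension `ne`, and the truncation level below are placeholder assumptions, not values from the paper.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)

# Dimensions from the paper: n_s = 2 hidden states, n_y = 1, n_x = 1.
# The noise dimension ne is an assumption (not stated in this excerpt).
ns, ny, nx, ne = 2, 1, 1, 1

# Placeholder generator weights (A_g, B_g, b_{s,g}, C_g, D_g, b_{y,g});
# the paper's numerical values are in its Appendix B (equation 213).
A_g = 0.4 * rng.standard_normal((ns, ns))
B_g = rng.standard_normal((ns, ne))
b_sg = rng.standard_normal(ns)
C_g = rng.standard_normal((ny + nx, ns))
D_g = rng.standard_normal((ny + nx, ne))
b_yg = rng.standard_normal(ny + nx)

def relu(v):
    return np.maximum(v, 0.0)

def generate(N, trunc=3.0):
    """Simulate N steps of equation (11), sampling e_g(t) from a
    truncated standard Gaussian on [-trunc, trunc] (truncation level assumed)."""
    s = np.zeros(ns)
    ys, xs = [], []
    for _ in range(N):
        e = truncnorm.rvs(-trunc, trunc, size=ne, random_state=rng)
        yx = np.tanh(C_g @ s + D_g @ e + b_yg)   # (11b): stacked [y(t); x(t)]
        s = relu(A_g @ s + B_g @ e + b_sg)       # (11a): state update
        ys.append(yx[:ny])
        xs.append(yx[ny:])
    return np.asarray(ys), np.asarray(xs)

def log_gibbs_posterior(theta, empirical_loss, N, sigma2=0.02):
    """Unnormalised log-density of the Gibbs posterior
    rho_N(theta) ~ pi(theta) * exp(-lambda_N * L_hat(theta)),
    with prior pi = N(0, sigma2 * I) and lambda_N = N, as quoted above."""
    log_prior = -0.5 * np.dot(theta, theta) / sigma2
    return log_prior - N * empirical_loss(theta)

y, x = generate(1000)
```

Working with the log-density is deliberate: any MCMC routine can sample the Gibbs posterior from it, since only density ratios are needed and the normalising constant never has to be computed.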