On kernel-based statistical learning theory in the mean field limit

Authors: Christian Fiedler, Michael Herty, Sebastian Trimpe

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | Our contributions cover three closely related aspects. 1) We extend and complete the theory of mean field limit kernels and their RKHSs (Section 2). In Theorem 2.3, we precisely describe the relationship between the RKHSs of the finite-input kernels and the RKHS of the mean field kernel, completing the results from [22]. In particular, this allows us to interpret the latter RKHS as the mean field limit of the former RKHSs. Furthermore, in Lemmas 2.4 and 2.5, we provide inequalities for the corresponding RKHS norms, which are necessary for Γ-convergence arguments. 2) We provide results relevant for approximation with mean field limit kernels (Section 3). In Proposition 3.1, we give a first result on the approximation power of mean field limit kernels, and in Theorem 3.3 we provide a representer theorem for these kernels. Its proof uses a Γ-convergence argument, which is, to the best of our knowledge, the first use of this technique in the context of kernel methods. 3) We investigate the mean field limit of kernels in the context of statistical learning theory (Section 4). (See the illustrative sketch below the table.)
Researcher Affiliation | Academia | Christian Fiedler (Institute for Data Science in Mechanical Engineering (DSME), RWTH Aachen University, Aachen, Germany; fiedler@dsme.rwth-aachen.de), Michael Herty (Institute for Geometry and Practical Mathematics (IGPM), RWTH Aachen University, Aachen, Germany; herty@igpm.rwth-aachen.de), Sebastian Trimpe (DSME, RWTH Aachen University, Aachen, Germany; trimpe@dsme.rwth-aachen.de)
Pseudocode | No | The paper is theoretical and focuses on mathematical proofs and definitions; it does not include any pseudocode or algorithm blocks.
Open Source Code | No | The paper does not mention releasing any source code or provide links to code repositories.
Open Datasets | No | The paper defines theoretical 'data sets' for its statistical learning setup but does not use, or provide access information for, any publicly available dataset for empirical training.
Dataset Splits | No | The paper is theoretical and does not describe empirical experiments that would involve training, validation, or test data splits.
Hardware Specification | No | The paper is theoretical and does not describe any computational experiments or specify the hardware used.
Software Dependencies | No | The paper is theoretical and does not describe any computational experiments that would require specific software dependencies with version numbers.
Experiment Setup | No | The paper is theoretical and does not describe an experimental setup with hyperparameters or specific training configurations.
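The paper itself contains no code, but the central object in the Research Type row above, a mean field limit kernel, can be illustrated numerically. The sketch below is a hypothetical example, not the paper's construction: it uses the standard double-sum kernel on particle ensembles, k_N(x, y) = N^{-2} Σ_{i,j} k(x_i, y_j), which equals the kernel evaluated at the empirical measures of the two ensembles and converges, for i.i.d. particles, to the mean field limit K(μ, ν) = ∬ k dμ dν. The paper studies a more general class of such kernel sequences and their RKHSs; all names below (k_base, k_N, K_limit) are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def k_base(x, y):
    """Base kernel on R: Gaussian RBF with unit bandwidth."""
    return np.exp(-(x - y) ** 2)

def k_N(xs, ys):
    """Finite-input 'double sum' kernel on particle ensembles:
    k_N(x, y) = (1/N^2) * sum_{i,j} k_base(x_i, y_j),
    i.e. the kernel evaluated at the two empirical measures."""
    return k_base(xs[:, None], ys[None, :]).mean()

def K_limit(m1, v1, m2, v2):
    """Closed-form mean field limit K(mu, nu) = E[k_base(X, Y)]
    for independent X ~ N(m1, v1), Y ~ N(m2, v2); then
    X - Y ~ N(m1 - m2, v1 + v2) and
    E[exp(-Z^2)] = exp(-m^2 / (1 + 2v)) / sqrt(1 + 2v)."""
    m, v = m1 - m2, v1 + v2
    return np.exp(-m ** 2 / (1 + 2 * v)) / np.sqrt(1 + 2 * v)

mu = (0.0, 1.0)  # mean and variance of the first measure
nu = (1.0, 0.5)  # mean and variance of the second measure

print("limit K(mu, nu) =", K_limit(*mu, *nu))
for N in (10, 100, 1000, 10000):
    xs = rng.normal(mu[0], np.sqrt(mu[1]), N)
    ys = rng.normal(nu[0], np.sqrt(nu[1]), N)
    print(f"N={N:>5}: k_N =", k_N(xs, ys))
```

For two Gaussian input measures the limit is available in closed form (implemented in K_limit), so the finite-N values can be checked against it directly; running the script shows k_N approaching the limit as N grows.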