On Statistical Learning Theory for Distributional Inputs

Authors: Christian Fiedler, Pierre-François Massiani, Friedrich Solowjow, Sebastian Trimpe

ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We prove two oracle inequalities for kernel machines in general distributional learning scenarios, as well as a generalization result based on algorithmic stability. Our main results are formulated in great generality, utilizing general Hilbertian embeddings, which makes them applicable to a wide array of approaches to distributional learning. Additionally, we specialize our results to the cases of kernel mean embeddings and of the recently introduced Hilbertian embeddings based on sliced Wasserstein distances, providing concrete instances of the general setup.
Researcher Affiliation | Academia | Institute for Data Science in Mechanical Engineering (DSME), RWTH Aachen University, Aachen, Germany.
Pseudocode | No | The paper contains no pseudocode or algorithm blocks.
Open Source Code | No | The paper makes no statement about open-sourcing code and provides no link to a code repository for the described methodology.
Open Datasets | No | The paper is theoretical and runs no experiments on datasets, so it does not specify public dataset availability. It mentions example applications but does not use them for empirical evaluation.
Dataset Splits | No | The paper describes no experiments and therefore provides no training/validation/test split information.
Hardware Specification | No | The paper describes no computational experiments and therefore specifies no hardware.
Software Dependencies | No | The paper describes no computational experiments or software implementations, so it lists no software dependencies or version numbers.
Experiment Setup | No | The paper describes no computational experiments, so it provides no experimental setup, hyperparameters, or training settings.
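
The Research Type row above quotes the paper's abstract, which specializes the general Hilbertian-embedding results to kernel mean embeddings. As an illustration only (not code from the paper, which provides none), the following minimal sketch shows the standard distribution-regression pipeline such results cover: each input distribution is observed through a bag of samples, compared via the RKHS distance between empirical kernel mean embeddings (the MMD), and a kernel ridge regressor is trained on top. All names, hyperparameters, and the toy task are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mmd2(X, Y, gamma=1.0):
    """Squared MMD between sample bags X (n,d) and Y (m,d) under the
    Gaussian base kernel k(x, y) = exp(-gamma * ||x - y||^2); this is the
    squared RKHS distance between the empirical kernel mean embeddings."""
    def gram(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-gamma * sq)
    return gram(X, X).mean() + gram(Y, Y).mean() - 2.0 * gram(X, Y).mean()

# Toy task (illustrative): each input is a bag sampled from N(mu, 1),
# and the regression target is mu.
mus = rng.uniform(-3.0, 3.0, size=40)
bags = [rng.normal(mu, 1.0, size=(50, 1)) for mu in mus]

# Outer Gaussian kernel on the embedded inputs: K_ij = exp(-eta * MMD^2).
eta, lam = 0.5, 1e-3
n = len(bags)
K = np.empty((n, n))
for i in range(n):
    for j in range(i, n):
        K[i, j] = K[j, i] = np.exp(-eta * mmd2(bags[i], bags[j]))

# Kernel ridge regression on the embeddings: a kernel machine with
# distributional inputs, of the kind the paper's oracle inequalities study.
alpha = np.linalg.solve(K + lam * np.eye(n), mus)

# Predict the mean of an unseen distribution from a fresh sample bag.
test_bag = rng.normal(1.5, 1.0, size=(50, 1))
k_test = np.array([np.exp(-eta * mmd2(test_bag, b)) for b in bags])
print("predicted mean:", k_test @ alpha)  # should land near 1.5
```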
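
The abstract also specializes the results to Hilbertian embeddings based on sliced Wasserstein distances. The sketch below, again an illustrative assumption rather than the paper's construction, builds a finite-dimensional embedding whose Euclidean distances Monte-Carlo-approximate the sliced Wasserstein-2 distance; any standard kernel on this feature vector then yields another kernel machine on distributions.

```python
import numpy as np

def sw_embedding(X, n_proj=64, n_quantiles=32, seed=0):
    """Project the sample bag X (n,d) onto random unit directions and record
    each projection's empirical quantile function on a fixed grid. With the
    same seed (hence the same directions) across inputs, the Euclidean
    distance between two embeddings approximates the sliced W2 distance."""
    rng = np.random.default_rng(seed)  # shared directions across all inputs
    theta = rng.normal(size=(n_proj, X.shape[1]))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    proj = X @ theta.T                           # (n, n_proj) 1-D projections
    qs = (np.arange(n_quantiles) + 0.5) / n_quantiles
    emb = np.quantile(proj, qs, axis=0)          # (n_quantiles, n_proj)
    return emb.ravel() / np.sqrt(n_proj * n_quantiles)

rng = np.random.default_rng(1)
e1 = sw_embedding(rng.normal(0.0, 1.0, size=(200, 2)))
e2 = sw_embedding(rng.normal(2.0, 1.0, size=(200, 2)))
# For these Gaussians, SW2 = ||mu1 - mu2|| / sqrt(d) = 2, up to sampling error.
print("approx. SW2 distance:", np.linalg.norm(e1 - e2))
```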