An analysis of Ermakov-Zolotukhin quadrature using kernels

Authors: Ayoub Belhadji

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We illustrate the theoretical results by numerical experiments in Section 5." "In this section, we illustrate the theoretical results presented in Section 3 in the case of the RKHS associated to the kernel ..." "Figure 1 shows log-log plots of the squared error w.r.t. N, averaged over 1000 samples for each point, for s ∈ {2, 3}."
Researcher Affiliation | Academia | Ayoub Belhadji, Univ Lyon, ENS de Lyon, Inria, CNRS, UCBL, LIP UMR 5668, Lyon, France (ayoub.belhadji@ens-lyon.fr)
Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks.
Open Source Code | No | The paper does not provide any statement or link indicating the release of open-source code for the methodology described.
Open Datasets | No | The numerical experiments are conducted in a theoretical setting (an RKHS associated with a kernel and the uniform measure on [0, 1]), not on a publicly available dataset in the usual machine-learning sense. The citation [5] refers to a textbook, not a specific dataset.
Dataset Splits | No | The paper does not describe traditional dataset splits (e.g., train/validation/test); the experiments are numerical simulations of theoretical results rather than empirical evaluations on a pre-existing dataset.
Hardware Specification | No | The paper does not provide any details about the hardware used for the experiments.
Software Dependencies | No | The paper does not provide version numbers for any software dependencies used in the experiments.
Experiment Setup | Yes | "We take N ∈ [5, 100]." "Figure 1 shows log-log plots of the squared error w.r.t. N, averaged over 1000 samples for each point, for s ∈ {2, 3}." For KBIQ, the paper sets M = 2N and γ = σ, with x following the distribution of the projection DPP and g ∈ {e_1, e_10, e_20}.
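
Since the paper releases neither pseudocode nor code (rows above), a reproduction has to be rebuilt from the textual description. Below is a minimal sketch, in Python/NumPy, of the two ingredients the experiments rely on: the projection DPP attached to the first N orthonormal Fourier functions on [0, 1], sampled with the classical chain-rule scheme of Hough et al., and the Ermakov-Zolotukhin estimator obtained by interpolating at the sampled nodes. The helper names (`fourier_basis`, `sample_projection_dpp`, `ez_coefficients`), the rejection bound N + 1, the test function exp, and the sample counts are our illustrative assumptions; this is not the authors' code and it does not reproduce the exact Figure 1 setting (kernel with smoothness s ∈ {2, 3}, g ∈ {e_1, e_10, e_20}, N ∈ [5, 100], 1000 samples per point).

```python
import numpy as np

rng = np.random.default_rng(0)


def fourier_basis(x, N):
    """First N orthonormal Fourier functions on [0, 1]:
    e_1 = 1, e_{2k} = sqrt(2) cos(2 pi k x), e_{2k+1} = sqrt(2) sin(2 pi k x)."""
    x = np.atleast_1d(x)
    out = np.ones((len(x), N))
    for j in range(1, N):
        k = (j + 1) // 2
        trig = np.cos if j % 2 == 1 else np.sin
        out[:, j] = np.sqrt(2.0) * trig(2.0 * np.pi * k * x)
    return out


def sample_projection_dpp(N, rng):
    """Chain-rule sampler (Hough et al.) for the projection DPP attached to the
    first N Fourier functions, with uniform base measure on [0, 1]. Each new
    point is drawn by rejection from the uniform proposal: the residual feature
    norm is at most N + 1 for this basis, which bounds the acceptance ratio."""
    nodes = []
    U = np.zeros((N, 0))  # orthonormal basis of the selected feature directions
    while len(nodes) < N:
        x = rng.uniform()
        phi = fourier_basis(x, N)[0]
        resid = phi - U @ (U.T @ phi)  # remove directions already covered
        r2 = resid @ resid
        if rng.uniform() * (N + 1.0) < r2:
            nodes.append(x)
            U = np.column_stack([U, resid / np.sqrt(r2)])
    return np.array(nodes)


def ez_coefficients(f, nodes):
    """Ermakov-Zolotukhin estimator: interpolate f at the DPP nodes in the span
    of e_1, ..., e_N; coefficient c_k is an unbiased estimator of <f, e_k>."""
    A = fourier_basis(nodes, len(nodes))  # A[i, k] = e_{k+1}(x_i)
    return np.linalg.solve(A, f(nodes))


# Squared error of the estimated mean <f, e_1> = int_0^1 f, averaged over runs.
f = np.exp               # hypothetical test function, not in any finite span
true_value = np.e - 1.0  # int_0^1 exp(x) dx

for N in (5, 10, 20, 40):
    err = [
        (ez_coefficients(f, sample_projection_dpp(N, rng))[0] - true_value) ** 2
        for _ in range(200)
    ]
    print(f"N = {N:3d}   mean squared error = {np.mean(err):.3e}")
```

Under the Ermakov-Zolotukhin analysis the paper revisits, the first interpolation coefficient is an unbiased estimator of ∫_0^1 f(x) dx, with fluctuations controlled by the part of f lying outside span{e_1, ..., e_N}, so the printed mean squared error should shrink as N grows.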