Kernel interpolation with continuous volume sampling

Authors: Ayoub Belhadji, Rémi Bardenet, Pierre Chainais

ICML 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 6. A Numerical Simulation: To illustrate Theorem 4, we let $\mathcal{X} = [0, 1]$ and $d\omega$ be the uniform measure on $\mathcal{X}$. Let the RKHS kernel be (Berlinet & Thomas-Agnan, 2011) $k_s(x, y) = 1 + 2 \sum_{m \geq 1} \frac{1}{m^{2s}} \cos(2\pi m (x - y))$ (45), so that $\mathcal{F} = \mathcal{F}_s$ is the Sobolev space of order $s$ on $[0, 1]$. The Mercer decomposition (3) of $k_s$ is such that $\sigma_1 = 1$, $e_1 \equiv 1$ is constant and, for $n \geq 1$, $\sigma_{2n} = \sigma_{2n+1} = 1/n^{2s}$, $e_{2n}(x) = \sqrt{2}\cos(2\pi n x)$, $e_{2n+1}(x) = \sqrt{2}\sin(2\pi n x)$ (46). Note that $k_s$ can be expressed in closed form using Bernoulli polynomials (Wahba, 1990). In Theorem 4, (24) decomposes the expected interpolation error of any $\mu_g$ in terms of the interpolation errors $\epsilon_m(N)$ of the $\mu_{e_m}$. Therefore, it is sufficient to numerically check the values of the $\epsilon_m(N)$. As an illustration we consider $g \in \{e_1, e_5, e_7\}$ in (24), so that $\mu_{e_m} = \Sigma e_m = \sigma_m e_m$, with $m \in \{1, 5, 7\}$. We use the Gibbs sampler proposed by Rezaei & Gharan (2019) to approximate continuous volume sampling. We consider various numbers of points $N \in [2, 20]$. Figure 2 shows log-log plots of the theoretical (th) value of $\mathbb{E}_{\mathrm{VS}} \| \mu_g - \Pi_{T(x)} \mu_g \|_{\mathcal{F}}^2$ compared to its empirical (emp) counterpart, vs. $N$, for $s \in \{1, 2\}$. For each $N$, the estimate is an average over 50 independent samples, each sample resulting from 500 individual Gibbs iterations. For both values of the smoothness parameter $s$, we observe a close fit of the estimate with the actual expected error. (A hedged code sketch of this kernel is given after the table.)
Researcher Affiliation | Academia | Univ. Lille, CNRS, Centrale Lille, UMR 9189 CRIStAL, Villeneuve d'Ascq, France.
Pseudocode | No | The paper describes algorithms and sampling methods, but does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | We leave investigating the efficiency of such an MCMC approach to volume sampling for future work. (...) Investigating efficient samplers and their impact on bounds is deferred to future work; the current paper is a theoretical motivation for further methodological research. The paper does not provide concrete access to source code for the methodology described.
Open Datasets | No | To illustrate Theorem 4, we let $\mathcal{X} = [0, 1]$ and $d\omega$ be the uniform measure on $\mathcal{X}$. Let the RKHS kernel be (Berlinet & Thomas-Agnan, 2011) $k_s(x, y) = 1 + 2 \sum_{m \geq 1} \frac{1}{m^{2s}} \cos(2\pi m (x - y))$ (45), so that $\mathcal{F} = \mathcal{F}_s$ is the Sobolev space of order $s$ on $[0, 1]$. The paper defines a mathematical space for its numerical simulation but does not provide a publicly available dataset with concrete access information.
Dataset Splits | No | We consider various numbers of points $N \in [2, 20]$. (...) For each $N$, the estimate is an average over 50 independent samples, each sample resulting from 500 individual Gibbs iterations. The paper describes simulation parameters and repetitions but does not specify train/validation/test splits of a dataset.
Hardware Specification | No | The paper does not provide any specific hardware details used for running the simulations.
Software Dependencies | No | We use the Gibbs sampler proposed by Rezaei & Gharan (2019) to approximate continuous volume sampling. This mentions a method from another paper, but does not list specific software dependencies with version numbers.
Experiment Setup | Yes | We consider various numbers of points $N \in [2, 20]$. Figure 2 shows log-log plots of the theoretical (th) value of $\mathbb{E}_{\mathrm{VS}} \| \mu_g - \Pi_{T(x)} \mu_g \|_{\mathcal{F}}^2$ compared to its empirical (emp) counterpart, vs. $N$, for $s \in \{1, 2\}$. For each $N$, the estimate is an average over 50 independent samples, each sample resulting from 500 individual Gibbs iterations. (A hedged sketch of this averaging loop also follows the table.)
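
The Research Type row quotes the paper's numerical simulation, which is built on the Sobolev kernel $k_s$ of Eq. (45). The following minimal Python sketch evaluates a truncated version of that Fourier series and, for $s = 1$, checks it against a Bernoulli-polynomial closed form: the standard Fourier identity $\sum_{m \geq 1} \cos(2\pi m u)/m^2 = \pi^2 B_2(u)$ for $u \in [0, 1]$, with $B_2(u) = u^2 - u + 1/6$, gives $k_1(x, y) = 1 + 2\pi^2 B_2(\{x - y\})$. The truncation level M, the function names, and the test point are illustrative choices, not taken from the paper.

```python
import numpy as np

def sobolev_kernel_series(x, y, s, M=2000):
    """Truncated Fourier/Mercer series of the Sobolev kernel k_s on [0, 1] (Eq. 45)."""
    m = np.arange(1, M + 1)
    return 1.0 + 2.0 * np.sum(np.cos(2.0 * np.pi * m * (x - y)) / m ** (2 * s))

def sobolev_kernel_s1_closed_form(x, y):
    """Closed form for s = 1 via the second Bernoulli polynomial B_2(u) = u^2 - u + 1/6."""
    u = (x - y) % 1.0                       # reduce the difference to [0, 1)
    return 1.0 + 2.0 * np.pi ** 2 * (u ** 2 - u + 1.0 / 6.0)

if __name__ == "__main__":
    x, y = 0.3, 0.72
    print(sobolev_kernel_series(x, y, s=1))     # truncated series (accuracy ~ 1/M)
    print(sobolev_kernel_s1_closed_form(x, y))  # closed form, same value up to truncation
```

For $s = 2$ the series converges much faster; its closed form involves $B_4$ rather than $B_2$ and is omitted here.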
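
The Experiment Setup row describes averaging an interpolation error over repeated draws of $N$ nodes. Below is a hedged sketch of that loop under two assumptions that are not from the paper: (i) the nodes are drawn i.i.d. uniformly on $[0, 1]$ as a placeholder for the continuous volume sampler of Rezaei & Gharan (2019), which is not reimplemented here, so the printed values are not expected to reproduce Figure 2; and (ii) the residual $\|\mu_{e_m} - \Pi_{T(x)} \mu_{e_m}\|_{\mathcal{F}}^2 = \sigma_m - \sigma_m^2\, e_m(x)^\top K^{-1} e_m(x)$ comes from standard RKHS projection identities (using $\mu_{e_m} = \sigma_m e_m$ and $\|e_m\|_{\mathcal{F}}^2 = 1/\sigma_m$), not from a formula quoted in the paper. Helper names, the truncation level, and the repetition count are illustrative.

```python
import numpy as np

def eigenpair(m, s):
    """Mercer eigenpair (sigma_m, e_m) of k_s from Eq. (46): sigma_1 = 1, e_1 = 1;
    sigma_{2n} = sigma_{2n+1} = n^{-2s}, e_{2n} = sqrt(2)cos(2 pi n x),
    e_{2n+1} = sqrt(2)sin(2 pi n x)."""
    if m == 1:
        return 1.0, lambda x: np.ones_like(x)
    n = m // 2
    sigma = float(n) ** (-2.0 * s)
    if m % 2 == 0:
        return sigma, lambda x: np.sqrt(2.0) * np.cos(2.0 * np.pi * n * x)
    return sigma, lambda x: np.sqrt(2.0) * np.sin(2.0 * np.pi * n * x)

def kernel_matrix(x, s, M=500):
    """Gram matrix of k_s on the nodes x, via the Mercer series truncated at M terms."""
    K = np.ones((len(x), len(x)))           # contribution of sigma_1 * e_1 * e_1 = 1
    for j in range(2, M + 1):
        sigma, e = eigenpair(j, s)
        K += sigma * np.outer(e(x), e(x))
    return K

def interpolation_error(x, m, s):
    """Squared RKHS residual of projecting mu_{e_m} = sigma_m e_m onto span{k_s(x_i, .)}:
    sigma_m - sigma_m^2 e_m(x)^T K^{-1} e_m(x) (standard projection identity)."""
    sigma, e = eigenpair(m, s)
    K = kernel_matrix(x, s)
    v = sigma * e(x)                         # evaluations of mu_{e_m} at the nodes
    return sigma - v @ np.linalg.solve(K, v)

rng = np.random.default_rng(0)
s, m, n_rep = 1, 5, 50                       # smoothness, eigenfunction index, repetitions
for N in [2, 5, 10, 20]:
    # i.i.d. uniform nodes: a stand-in for continuous volume sampling.
    errors = [interpolation_error(rng.uniform(size=N), m, s) for _ in range(n_rep)]
    print(N, np.mean(errors))
```

Replacing the uniform draws with volume-sampled nodes would turn this same residual computation into the empirical curves that the paper compares against the theoretical values in Figure 2.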