Kernel Quadrature with Randomly Pivoted Cholesky

Authors: Ethan Epperly, Elvira Moreno

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Theoretical and numerical results show that randomly pivoted Cholesky is fast and achieves comparable quadrature error rates to more computationally expensive quadrature schemes... The quadrature error for f(x, y) = sin(x) exp(y), g ≡ 1, and different numbers n of quadrature nodes for RPCHOLESKY kernel quadrature, kernel quadrature with nodes drawn iid from µ/µ(X), and Monte Carlo quadrature are shown in fig. 1b. ... Errors for the different methods are shown in fig. 2 (left panels). ... Results are shown in fig. 3.
Researcher Affiliation | Academia | Ethan N. Epperly and Elvira Moreno, Department of Computing and Mathematical Sciences, California Institute of Technology, Pasadena, CA 91125. {eepperly,emoreno2}@caltech.edu
Pseudocode | Yes | Algorithm 1 RPCHOLESKY: unoptimized implementation. Algorithm 2 RPCHOLESKY with rejection sampling. Algorithm 3 Helper subroutine to evaluate residual kernel. Algorithm 4 RPCHOLESKY with optimized rejection sampling. (A minimal sketch of the unoptimized RPCHOLESKY loop is given after this table.)
Open Source Code | Yes | Our code is available at https://github.com/eepperly/RPCholesky-Kernel-Quadrature.
Open Datasets | Yes | For X, we use 2 × 10⁴ randomly selected points from the QM9 dataset [33, 37, 40].
Dataset Splits | No | The paper describes the methods and how performance is quantified using `Err(S, w; g)`, but does not provide specific train/validation/test dataset splits or methodologies for creating them. (The usual form of `Err(S, w; g)` is recalled after this table.)
Hardware Specification | Yes | Experiments were run on a MacBook Pro with a 2.4 GHz 8-Core Intel Core i9 CPU and 64 GB 2667 MHz DDR4 RAM.
Software Dependencies | No | The paper mentions software such as Chebfun, the goodpoints package, and the DScribe package, but does not provide specific version numbers for these dependencies.
Experiment Setup | Yes | In our experiments, we initialize with s_1, ..., s_n drawn iid from µ and run for 10n MCMC steps. We use g = 4, δ = 0.5, and four bins for the COMPRESS++ algorithm. To compute the optimal weights (8), we add a small multiple of the identity to regularize the system: w⋆,reg = (k(S, S) + 10 ε_mach trace(k(S, S)) I)⁻¹ Tg(S). Here, ε_mach = 2⁻⁵² is the double-precision machine epsilon. (A code sketch of this regularized solve appears after this table.)
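
For reference, the Pseudocode row lists Algorithm 1, the unoptimized RPCHOLESKY implementation. The following is a minimal Python sketch of that selection loop, assuming a precomputed positive-semidefinite kernel matrix A; the function name and interface are illustrative and are not taken from the authors' repository.

```python
import numpy as np

def rpcholesky(A, n, seed=None):
    """Sketch of unoptimized randomly pivoted Cholesky (cf. Algorithm 1).

    A : (N, N) positive-semidefinite kernel matrix
    n : number of pivots (quadrature nodes) to select
    Returns pivot indices S and a factor F with A ≈ F @ F.T.
    """
    rng = np.random.default_rng(seed)
    N = A.shape[0]
    F = np.zeros((N, n))
    d = np.array(np.diag(A), dtype=float)        # residual diagonal
    S = np.zeros(n, dtype=int)
    for i in range(n):
        # Sample a pivot with probability proportional to the residual diagonal.
        s = rng.choice(N, p=d / d.sum())
        S[i] = s
        # Form the residual column at the pivot and append it to the factor.
        g = A[:, s] - F[:, :i] @ F[s, :i]
        F[:, i] = g / np.sqrt(g[s])
        # Downdate the residual diagonal; clip tiny negatives from round-off.
        d = np.clip(d - F[:, i] ** 2, 0.0, None)
    return S, F
```

In the quadrature setting described in the paper, the selected indices S serve as the quadrature nodes; the weights are then obtained from the regularized linear solve quoted in the Experiment Setup row.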
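
The Dataset Splits row refers to the error functional `Err(S, w; g)`. The paper quantifies performance with a worst-case quadrature error of the following standard form (a paraphrase over the unit ball of the RKHS H(k); see the paper for the exact definition):

```latex
\mathrm{Err}(\mathsf{S}, \mathbf{w}; g)
  = \sup_{\|f\|_{\mathcal{H}(k)} \le 1}
    \left| \int_{\mathcal{X}} f(x)\, g(x)\, \mathrm{d}\mu(x)
         - \sum_{i=1}^{n} w_i\, f(s_i) \right|
```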
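
The regularized weight computation quoted in the Experiment Setup row amounts to a single linear solve. Below is a short Python sketch, assuming the node kernel matrix k(S, S) and the vector Tg(S) of kernel integrals against g are already available; computing Tg(S) depends on the kernel and the measure µ and is not shown here.

```python
import numpy as np

def regularized_weights(K_SS, Tg_S):
    """Sketch of the regularized weight solve quoted in the Experiment Setup row.

    K_SS : (n, n) kernel matrix k(S, S) on the selected nodes
    Tg_S : (n,) vector Tg(S) of kernel integrals against g (assumed given)
    """
    eps_mach = 2.0 ** -52                      # double-precision machine epsilon
    shift = 10.0 * eps_mach * np.trace(K_SS)   # small multiple of the identity
    return np.linalg.solve(K_SS + shift * np.eye(K_SS.shape[0]), Tg_S)
```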