Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Extending Mercer's expansion to indefinite and asymmetric kernels

Authors: Sungwoo Jeong, Alex Townsend

ICLR 2025

Each entry below lists a reproducibility variable, its classified result, and the LLM response supporting that classification.
Research Type: Theoretical
LLM response: In this paper we extend Mercer's expansion to continuous kernels, providing a rigorous theoretical underpinning for indefinite and asymmetric kernels. We generalize Mercer's expansion to a general continuous kernel, where the kernel may be indefinite or asymmetric. The following two theorems summarize our main theoretical results about Mercer's expansion for general continuous kernels (see Section 3).
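For context, the classical Mercer theorem and the SVD-style generalization the response describes can be summarized as follows. The notation is a schematic sketch consistent with Eq. (10) quoted below, not taken verbatim from the paper.

```latex
% Classical Mercer: for a continuous, symmetric, positive semidefinite
% kernel K on [a,b] x [a,b],
K(x, y) = \sum_{j=1}^{\infty} \lambda_j \, \phi_j(x) \, \phi_j(y),
\qquad \lambda_j \ge 0, \quad \{\phi_j\} \text{ orthonormal in } L^2[a,b].
% For a general continuous kernel (possibly indefinite or asymmetric),
% an SVD-style expansion replaces eigenpairs with singular functions:
K(x, y) = \sum_{j=1}^{\infty} \sigma_j \, u_j(x) \, v_j(y),
\qquad \sigma_1 \ge \sigma_2 \ge \cdots \ge 0.
```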
Researcher Affiliation: Academia
LLM response: Sungwoo Jeong, Department of Mathematics, Cornell University (EMAIL); Alex Townsend, Department of Mathematics, Cornell University (EMAIL)
Pseudocode: Yes
LLM response: Section 5, "Computing Mercer's Expansion for General Kernels." The procedure for computing the low-rank SVD of K_R involves the following steps, which is essentially a fast way to compute a Mercer's expansion of a finite-rank kernel:
1. Perform two QR decompositions of Φ(x) and Ψ(y), using a function analogue of Householder QR (Trefethen, 2010). We can write these QR decompositions as Φ(x) = Q^left(x) R1 and Ψ(y) = Q^right(y) R2, where R1, R2 ∈ ℝ^{R×R} and the columns of Q^left(x) and Q^right(y) are orthonormal functions.
2. Compute the SVD of the R×R matrix R1 C R2ᵀ = U Σ Vᵀ.
3. Construct the final SVD-based approximation by combining the singular values and the orthonormalized functions to form Σ_{j=1}^{R} σ_j u_j(x) v_j(y) (Eq. 10), where σ_1, ..., σ_R are the diagonal entries of Σ, u_j(x) = Σ_{s=1}^{R} U_{sj} Q_s^left(x), and v_j(y) = Σ_{s=1}^{R} V_{sj} Q_s^right(y).
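The three steps quoted above can be sketched numerically. This is a minimal discretized stand-in, not the paper's implementation: the continuous (quasimatrix) QR, which the paper computes with a function analogue of Householder QR (e.g. via Chebfun), is replaced by an ordinary NumPy QR on point samples, and Φ, Ψ, and C are hypothetical rank-R factors chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, R = 200, 4                       # grid points, kernel rank
x = np.linspace(-1.0, 1.0, n)

# Hypothetical finite-rank kernel K(x, y) = Phi(x) C Psi(y)^T, sampled on the grid
Phi = np.vander(x, R, increasing=True)          # columns 1, x, x^2, x^3
Psi = np.cos(np.outer(x, np.arange(1, R + 1)))  # columns cos(x), ..., cos(Rx)
C = rng.standard_normal((R, R))                 # possibly asymmetric coupling

# Step 1: QR of both sample matrices (discrete analogue of the function QR)
Q_left, R1 = np.linalg.qr(Phi)       # Phi = Q_left @ R1
Q_right, R2 = np.linalg.qr(Psi)      # Psi = Q_right @ R2

# Step 2: SVD of the small R-by-R matrix R1 C R2^T
U, sigma, Vt = np.linalg.svd(R1 @ C @ R2.T)

# Step 3: assemble u_j = sum_s U_sj Q^left_s and v_j = sum_s V_sj Q^right_s
u = Q_left @ U
v = Q_right @ Vt.T

# The rank-R expansion sum_j sigma_j u_j(x) v_j(y) reproduces the sampled kernel
K = Phi @ C @ Psi.T
K_svd = (u * sigma) @ v.T
print(np.allclose(K, K_svd))  # True
```

The point of the two-QR reduction is cost: the large objects (here n×R sample matrices, in the paper quasimatrices) are only orthogonalized, while the dense SVD runs on an R×R matrix.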
Open Source Code: No
LLM response: The paper does not provide an explicit statement or link indicating that the source code for the methodology described is publicly available.

Open Datasets: No
LLM response: The paper presents theoretical work and does not conduct experiments requiring datasets.

Dataset Splits: No
LLM response: The paper is theoretical and does not involve experiments with dataset splits.

Hardware Specification: No
LLM response: The paper focuses on theoretical contributions and does not describe hardware used for experiments.

Software Dependencies: No
LLM response: The paper mentions Chebfun as an implementation for part of the algorithm, but it does not specify software dependencies with version numbers for reproducing the methodology described in this paper.

Experiment Setup: No
LLM response: The paper is theoretical and does not present an experimental setup with hyperparameters or training configurations.