Improved Convergence Rates for Sparse Approximation Methods in Kernel-Based Learning
Authors: Sattar Vakili, Jonathan Scarlett, Da-Shan Shiu, Alberto Bernacchia
ICML 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | Improved Convergence Rates for Sparse Approximation Methods in Kernel-Based Learning. Despite this remarkable empirical success, significant gaps remain in the existing results for the analytical bounds on the error due to approximation. In this work, we provide novel confidence intervals for the Nyström method and the sparse variational Gaussian process approximation method, which we establish using novel interpretations of the approximate (surrogate) posterior variance of the models. Our confidence intervals lead to improved performance bounds in both regression and optimization problems. (A sketch of the Nyström surrogate variance follows the table.) |
| Researcher Affiliation | Collaboration | ¹MediaTek Research, ²National University of Singapore. Correspondence to: Sattar Vakili <sattar.vakili@mtkresearch.com>. |
| Pseudocode | Yes | Appendix B (Pseudo-Code for S-BPE): "Pseudo-code for S-BPE is provided in Algorithm 1 below." Algorithm 1: Sparse Batched Pure Exploration (S-BPE). |
| Open Source Code | No | No explicit statement about providing open-source code for the methodology described in this paper or a direct link to a source-code repository was found. |
| Open Datasets | No | The paper does not describe experiments run on a specific dataset, so no information about dataset availability for training is provided. |
| Dataset Splits | No | The paper focuses on theoretical contributions and does not present experimental results with specific dataset splits (training, validation, test). |
| Hardware Specification | No | The paper focuses on theoretical analysis and algorithm design, and thus does not describe specific hardware used for experiments. |
| Software Dependencies | No | The paper mentions TensorFlow and GPflow as libraries in which SVGP models are implemented in practice, but does not specify software dependencies with version numbers for the work presented. (A minimal GPflow usage sketch follows the table.) |
| Experiment Setup | No | The paper is theoretical and does not describe an experimental setup with specific hyperparameters, training configurations, or system-level settings. |
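
The "Research Type" row above quotes the paper's central object: confidence intervals stated in terms of the approximate (surrogate) posterior variance of the Nyström method. For orientation, here is a minimal NumPy sketch of that Nyström variance; it is not the authors' code, and the RBF kernel, the function names, the inducing inputs `Z`, and the noise level `sigma` are illustrative assumptions.

```python
import numpy as np

def rbf(A, B, lengthscale=1.0):
    """Squared-exponential kernel matrix between the rows of A and B."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-0.5 * sq / lengthscale**2)

def nystrom_posterior_var(Xq, X, Z, sigma=0.1, jitter=1e-8):
    """Posterior variance at query points Xq under the Nyström low-rank
    surrogate kernel k_hat(a, b) = k(a, Z) K_mm^{-1} k(Z, b)."""
    m = len(Z)
    L = np.linalg.cholesky(rbf(Z, Z) + jitter * np.eye(m))
    # Feature map phi(x) = L^{-1} k(Z, x), so phi(a).phi(b) = k_hat(a, b).
    Phi_n = np.linalg.solve(L, rbf(Z, X))   # (m, n) training features
    Phi_q = np.linalg.solve(L, rbf(Z, Xq))  # (m, q) query features
    # Bayesian linear regression in the m-dimensional feature space:
    # var(x) = sigma^2 * phi(x)^T (Phi_n Phi_n^T + sigma^2 I_m)^{-1} phi(x).
    A = Phi_n @ Phi_n.T + sigma**2 * np.eye(m)
    return sigma**2 * np.sum(Phi_q * np.linalg.solve(A, Phi_q), axis=0)

# Example: 200 random training inputs, 20 inducing points, 5 query points.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
Z = np.linspace(-3, 3, 20)[:, None]
Xq = rng.uniform(-3, 3, size=(5, 1))
print(nystrom_posterior_var(Xq, X, Z))
```

The feature-space form is the point of the approximation: each variance query solves an m × m system rather than the n × n system of exact GP regression, which is what makes the method attractive at scale and what the paper's error bounds have to account for.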
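The "Software Dependencies" row notes that SVGP models are implemented in practice in TensorFlow and GPflow. As a usage sketch only, the snippet below fits a sparse variational GP of the kind the paper analyzes and reads off its approximate posterior variance; the toy data, kernel choice, and 20 inducing points are assumptions for illustration, and the calls reflect the GPflow 2.x API rather than anything released with the paper.

```python
import numpy as np
import gpflow

# Toy data: a noisy sine wave, purely illustrative.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 6.0, size=(500, 1))
Y = np.sin(X) + 0.1 * rng.standard_normal(X.shape)

# SVGP with 20 inducing points: the sparse variational approximation.
Z = np.linspace(0.0, 6.0, 20)[:, None]
model = gpflow.models.SVGP(
    kernel=gpflow.kernels.SquaredExponential(),
    likelihood=gpflow.likelihoods.Gaussian(),
    inducing_variable=Z,
    num_data=len(X),
)

# Fit variational parameters and hyperparameters by maximizing the ELBO.
gpflow.optimizers.Scipy().minimize(
    model.training_loss_closure((X, Y)), model.trainable_variables
)

# Approximate (surrogate) posterior mean and variance at query points --
# the variance is the quantity the paper's confidence intervals scale.
mean, var = model.predict_f(np.array([[1.5], [3.0]]))
print(mean.numpy().ravel(), var.numpy().ravel())
```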