Near Input Sparsity Time Kernel Embeddings via Adaptive Sampling
Authors: David Woodruff, Amir Zandieh
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Furthermore, we empirically show that in large-scale regression tasks, our algorithm outperforms state-of-the-art kernel approximation methods. |
| Researcher Affiliation | Academia | (1) Carnegie Mellon University, USA; (2) École Polytechnique Fédérale de Lausanne, Switzerland. |
| Pseudocode | Yes | Algorithm 1 (RECURSIVE LEVERAGE SCORE SAMPLING) and Algorithm 2 (ROWSAMPLER FOR POLYNOMIAL KERNEL) are provided; see the sketch after the table for the sampling primitive these build on. |
| Open Source Code | No | The paper does not provide any explicit statement about releasing source code or a link to a code repository. |
| Open Datasets | Yes | We base our comparison on the four standard large-scale regression datasets evaluated in (Le et al., 2013): Wine, Insurance, CT Location, and Forest. The number of data points (n) and the dimensionality (d) of each dataset are given in Table 1 of the paper. |
| Dataset Splits | Yes | We use the same hyperparameters (kernel bandwidth and regularization parameter) across all kernel approximation methods; these were selected via cross-validation on the Fourier features method, our baseline. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments (e.g., GPU models, CPU types). |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., library names with versions). |
| Experiment Setup | Yes | We use the same hyperparameters (kernel bandwidth and regularization parameter) across all kernel approximation methods; these were selected via cross-validation on the Fourier features method, our baseline. |
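
The Pseudocode row above refers to the paper's recursive leverage score sampling routine. As a point of reference only, the following is a minimal sketch of plain (non-recursive) ridge leverage score sampling, the primitive that such recursive schemes refine. It is not the paper's Algorithm 1; the function names, the regularization value, and the toy matrix are illustrative assumptions.

```python
import numpy as np

def ridge_leverage_scores(A, lam):
    """Exact ridge leverage scores: tau_i = a_i^T (A^T A + lam*I)^{-1} a_i."""
    d = A.shape[1]
    G = A.T @ A + lam * np.eye(d)        # regularized Gram matrix (d x d)
    X = np.linalg.solve(G, A.T)          # X = G^{-1} A^T, shape (d x n)
    return np.einsum('ij,ji->i', A, X)   # row-wise inner products a_i . x_i

def leverage_score_sample(A, lam, m, seed=None):
    """Sample m rows of A with probability proportional to their ridge
    leverage scores; rows are rescaled by 1/sqrt(m * p_i) so that
    S^T S is an unbiased estimate of A^T A."""
    rng = np.random.default_rng(seed)
    tau = ridge_leverage_scores(A, lam)
    p = tau / tau.sum()
    idx = rng.choice(A.shape[0], size=m, replace=True, p=p)
    S = A[idx] / np.sqrt(m * p[idx, None])
    return S, idx

if __name__ == "__main__":
    # Toy data, purely for illustration.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((5000, 50))
    lam = 1.0
    S, idx = leverage_score_sample(A, lam, m=400, seed=1)
    exact = A.T @ A + lam * np.eye(50)
    approx = S.T @ S + lam * np.eye(50)
    rel_err = np.linalg.norm(exact - approx, 2) / np.linalg.norm(exact, 2)
    print(f"relative spectral error: {rel_err:.3f}")
```

The exact score computation above forms the full Gram matrix and is only meant to make the sampling distribution concrete; the recursive scheme in the paper instead estimates the scores from subsamples, which is what enables the near input sparsity running time.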