Quadrature-based features for kernel approximation

Authors: Marina Munkhoeva, Yermek Kapushev, Evgeny Burnaev, Ivan Oseledets

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We derive the convergence behaviour and conduct an extensive empirical study that supports our hypothesis.
Researcher Affiliation | Academia | Skolkovo Institute of Science and Technology, Moscow, Russia; Institute of Numerical Mathematics of the Russian Academy of Sciences, Moscow, Russia
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks (clearly labeled algorithm sections or code-like formatted procedures).
Open Source Code | Yes | The code for this paper is available at https://github.com/maremun/quffka.
Open Datasets | Yes | We extensively study the proposed method on several established benchmarking datasets: Powerplant, LETTER, USPS, MNIST, CIFAR100 [23], LEUKEMIA [20].
Dataset Splits | No | The paper provides dataset names and overall sizes in Table 2, but it does not specify explicit train/validation/test splits (e.g., percentages, sample counts, or references to predefined splits) needed for reproducibility.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiment.
Experiment Setup | Yes | Approximation was constructed for different number of SR samples n = D/(2(d+1)+1), where d is an original feature space dimensionality and D is the new one. For the Gaussian kernel we set hyperparameter γ = 1/(2σ²) to the default value of 1/d for all the approximants.
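
The experiment-setup row relates the number of SR samples n to the target feature dimension D and fixes the Gaussian-kernel hyperparameter γ = 1/(2σ²) at 1/d. The sketch below illustrates these two relations only; the function names, the rounding choice, and the example dimensions are assumptions made for illustration and are not taken from the paper or the quffka repository.

```python
# Minimal sketch (not the authors' code) of the quoted experiment-setup relations.
# Assumes each degree-3 spherical-radial (SR) rule contributes 2(d+1)+1 features,
# so n = D / (2(d+1) + 1); rounding up is an illustrative choice.
import math


def sr_sample_count(D: int, d: int) -> int:
    """Number of SR samples n for a target feature dimension D
    and input dimensionality d, per the quoted formula."""
    return math.ceil(D / (2 * (d + 1) + 1))


def default_gamma(d: int) -> float:
    """Default Gaussian-kernel hyperparameter gamma = 1/(2*sigma^2),
    set to 1/d as stated in the quoted setup."""
    return 1.0 / d


if __name__ == "__main__":
    d = 784        # e.g. MNIST input dimensionality
    D = 3 * d      # hypothetical target dimension of the approximate feature map
    print("SR samples n:", sr_sample_count(D, d))
    print("gamma:", default_gamma(d))
```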