Towards a Unified Analysis of Random Fourier Features

Authors: Zhu Li, Jean-Francois Ton, Dino Oglic, Dino Sejdinovic

ICML 2019

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "In this section, we report the results of our numerical experiments (on both simulated and real-world datasets) aimed at validating our theoretical results and demonstrating the utility of Algorithm 1." |
| Researcher Affiliation | Academia | "1 Department of Statistics, University of Oxford, United Kingdom; 2 Department of Informatics, King's College London, United Kingdom." |
| Pseudocode | Yes | "Algorithm 1 APPROXIMATE LEVERAGE WEIGHTED RFF" (a hedged sketch of the underlying feature-map idea follows the table). |
| Open Source Code | No | The paper references third-party software it used, such as LIBSVM and scikit-learn, but provides no link to, or statement about open-sourcing, its own implementation code. |
| Open Datasets | Yes | "We use four datasets from Chang & Lin (2011) and Dheeru & Karra Taniskidou (2017) for this purpose, including two for regression and two for classification: CPU, KINEMATICS, COD-RNA and COVTYPE." ... "Dheeru, D. and Karra Taniskidou, E. UCI machine learning repository, 2017. URL http://archive.ics.uci.edu/ml." |
| Dataset Splits | Yes | "The Gaussian/RBF kernel is used for all the datasets with hyper-parameter tuning via 5-fold inner cross validation." (A sketch of such an inner cross-validation loop follows the table.) |
| Hardware Specification | No | The paper gives no details about the hardware used to run the experiments, such as CPU or GPU models or cloud-computing specifications. |
| Software Dependencies | No | The paper mentions using "the ridge regression and SVM package from Pedregosa et al. (2011)" (scikit-learn) and refers to LIBSVM (Chang & Lin, 2011), but it does not specify version numbers for these packages or any other dependencies. |
| Experiment Setup | No | The paper mentions "hyper-parameter tuning via 5-fold inner cross validation" for the Gaussian/RBF kernel but does not report the selected hyper-parameter values (e.g., regularization strength or kernel bandwidth) or other detailed training configurations. |
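The Pseudocode row refers to the paper's Algorithm 1, APPROXIMATE LEVERAGE WEIGHTED RFF. As a rough illustration only, and not the authors' algorithm, the sketch below builds the standard random Fourier feature map for the Gaussian/RBF kernel and then adds a generic importance-resampling step over a candidate pool of frequencies; the `scores` array is a hypothetical placeholder for the approximate leverage scores that the paper's algorithm would compute.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Plain (unweighted) random Fourier features for the Gaussian/RBF kernel ---
# k(x, y) = exp(-||x - y||^2 / (2 * sigma^2)); its spectral density is
# N(0, sigma^{-2} I), so frequencies are drawn from that Gaussian.
d, m, sigma = 10, 200, 1.0
X = rng.normal(size=(500, d))
W = rng.normal(scale=1.0 / sigma, size=(d, m))       # spectral frequency samples
b = rng.uniform(0.0, 2.0 * np.pi, size=m)            # random phases
Z = np.sqrt(2.0 / m) * np.cos(X @ W + b)             # feature map, shape (500, m)
K_approx = Z @ Z.T                                   # approximates the exact RBF Gram matrix

# --- Illustrative importance resampling (NOT the paper's Algorithm 1) ---
# The paper's algorithm resamples frequencies according to approximate
# leverage scores; `scores` below is a hypothetical placeholder for them.
N = 2000
W_pool = rng.normal(scale=1.0 / sigma, size=(d, N))  # candidate frequencies
scores = rng.uniform(size=N)                         # placeholder, not real leverage scores
q = scores / scores.sum()                            # resampling distribution
idx = rng.choice(N, size=m, p=q)                     # leverage-style resampling
b_lev = rng.uniform(0.0, 2.0 * np.pi, size=m)
scale = np.sqrt((1.0 / N) / q[idx])                  # importance correction per feature
Z_lev = scale * np.sqrt(2.0 / m) * np.cos(X @ W_pool[:, idx] + b_lev)
```

The correction factor sqrt((1/N) / q) keeps the resampled estimator unbiased with respect to the plain Monte Carlo estimate over the candidate pool; it stands in for the reweighting that a leverage-based scheme would apply.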
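The Dataset Splits and Experiment Setup rows note that hyper-parameters were tuned via 5-fold inner cross-validation using the scikit-learn ridge regression and SVM implementations, without reporting the searched values. A minimal sketch of such a loop, assuming an RBF-kernel SVM and a hypothetical parameter grid (the actual grids and selected values are not given in the paper), might look like this:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

# Stand-in data; the paper uses CPU, KINEMATICS, COD-RNA and COVTYPE.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Hypothetical grid: the paper does not report the values it searched over.
param_grid = {"C": [0.1, 1.0, 10.0], "gamma": [0.01, 0.1, 1.0]}

# 5-fold inner cross-validation on the training split, as stated in the paper.
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X_train, y_train)
print(search.best_params_, search.score(X_test, y_test))
```

For the regression datasets (CPU, KINEMATICS), a kernel ridge regressor such as scikit-learn's KernelRidge would replace SVC in the same loop.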