Learning Adaptive Random Features
Authors: Yanjun Li, Kai Zhang, Jun Wang, Sanjiv Kumar
AAAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | This section reports empirical evaluations on kernel matrix approximation and supervised learning tasks. |
| Researcher Affiliation | Collaboration | University of Illinois at Urbana-Champaign; Temple University; East China Normal University; Google Research |
| Pseudocode | Yes | Algorithm 1 Learning Fourier Features |
| Open Source Code | No | The paper does not provide an explicit statement or link to its own open-source code for the described methodology. |
| Open Datasets | Yes | The benchmark datasets used are listed in Table 1. |
| Dataset Splits | Yes | All data samples are split into training/test sets (2:1), unless splits are provided in the original data. Parameters are tuned via cross-validation on the training set. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., library names like PyTorch 1.9 or solver versions) needed to replicate the experiments. |
| Experiment Setup | Yes | Input data is normalized to have zero mean and unit variance in each dimension, and the Gaussian kernel width 2σ² is chosen as the dimension d of the input data... Parameters are tuned via cross-validation on the training set. For the number of features r = 50, 100, and 200, SAMPLE and CLUSTERp are run with the number of landmarks n = r/25, r/5, r, and 5r. ... All the experiments use n = r = 200. For regression, ridge regression is used and root mean square error (RMSE) is reported; for classification, an ℓ₂-regularized SVM is used and classification error is reported. |
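
The setup row above describes a standard random-Fourier-feature pipeline. As a minimal sketch of that pipeline, assuming plain Rahimi–Recht Gaussian features (a stand-in for the paper's learned features and its SAMPLE/CLUSTER baselines), with a synthetic dataset and the ridge penalty `alpha=1.0` as illustrative placeholders:

```python
import numpy as np
from sklearn.linear_model import Ridge

def make_gaussian_rff(d, r, sigma2, rng):
    """Classic Rahimi-Recht random Fourier features for the Gaussian
    kernel k(x, y) = exp(-||x - y||^2 / (2 * sigma2)); frequencies are
    drawn from the kernel's spectral density N(0, I / sigma2)."""
    W = rng.standard_normal((d, r)) / np.sqrt(sigma2)
    b = rng.uniform(0.0, 2.0 * np.pi, size=r)
    return lambda X: np.sqrt(2.0 / r) * np.cos(X @ W + b)

rng = np.random.default_rng(0)

# Synthetic stand-in for a Table 1 benchmark, split 2:1 train/test.
X = rng.standard_normal((900, 20))
y = X[:, 0] ** 2 + 0.1 * rng.standard_normal(900)
X_tr, X_te, y_tr, y_te = X[:600], X[600:], y[:600], y[600:]

# Normalize each input dimension to zero mean and unit variance,
# using statistics from the training set only.
mu, sd = X_tr.mean(axis=0), X_tr.std(axis=0)
X_tr, X_te = (X_tr - mu) / sd, (X_te - mu) / sd

# Kernel width 2*sigma^2 set to the input dimension d, i.e. sigma2 = d/2.
d = X_tr.shape[1]
phi = make_gaussian_rff(d, r=200, sigma2=d / 2.0, rng=rng)

# Ridge regression on the feature map; report test RMSE.
model = Ridge(alpha=1.0).fit(phi(X_tr), y_tr)
rmse = float(np.sqrt(np.mean((model.predict(phi(X_te)) - y_te) ** 2)))
print(f"r = 200 random Fourier features, test RMSE: {rmse:.4f}")
```

For classification, the same feature map would instead feed an ℓ₂-regularized linear SVM (e.g., scikit-learn's `LinearSVC`, which applies an ℓ₂ penalty by default), with classification error reported in place of RMSE.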