TRF: Learning Kernels with Tuned Random Features
Authors: Alistair Shilton, Sunil Gupta, Santu Rana, Arun Kumar Venkatesh, Svetha Venkatesh
AAAI 2022, pp. 8286-8294
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimentally, we have tested TRF on a variety of small and large real-world problems and shown that TRF outperforms comparable methods on most occasions. In this section we present experimental results for TRF applied to classification and regression problems for small and medium-sized datasets. Results for regression and classification are shown in Table 4. |
| Researcher Affiliation | Academia | Applied Artificial Intelligence Institute (A2I2), Deakin University, Geelong, Australia {alistair.shilton, sunil.gupta, santu.rana, aanjanapuravenk, svetha.venkatesh}@deakin.edu.au |
| Pseudocode | No | The paper describes algorithms and formulations but does not include a clearly labeled pseudocode or algorithm block. |
| Open Source Code | No | The paper does not provide an explicit statement or link for open-source code. |
| Open Datasets | Yes | All datasets are taken from the UCI repository (Dua and Graff 2017), normalised so xᵢ lies in the unit hypersphere, and split randomly into 80% training and 20% testing (a data-handling sketch follows the table). |
| Dataset Splits | Yes | Hyper-parameters were selected to minimise 10-fold cross-validation error on the training set, with λ, Λ, γ, l ∈ [0.01, 100], using Bayesian optimisation with the GP-UCB acquisition function (Srinivas et al. 2012) and a budget of 105 evaluations (5 in the initial random set). This was repeated for D = 50, 100, 200, 400, 800, 1600 random features to find the D that minimises 10-fold cross-validation error (a search-loop sketch follows the table). |
| Hardware Specification | No | Experiments were run on an Ubuntu 4.15 server with 72 x86_64 cores and 754 GB of memory running SVMHeavy 8. This describes a server environment but lacks specific CPU or GPU models. |
| Software Dependencies | Yes | Experiments were run on an Ubuntu 4.15 server with 72 x86_64 cores and 754 GB of memory running SVMHeavy 8. We have chosen to use Adam (Kingma and Ba 2015) in our experiments. |
| Experiment Setup | Yes | For our TRF method we use an RBF meta-kernel κ with length-scale l, and an RBF reference kernel K̂ with fixed length-scale 1. Hyper-parameters were selected to minimise 10-fold cross-validation error on the training set, with λ, Λ, γ, l ∈ [0.01, 100], using Bayesian optimisation with the GP-UCB acquisition function (Srinivas et al. 2012) and a budget of 105 evaluations (5 in the initial random set). This was repeated for D = 50, 100, 200, 400, 800, 1600 random features to find the D that minimises 10-fold cross-validation error (a random-feature sketch follows the table). |
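
The dataset handling in the Open Datasets row is straightforward to reproduce. Below is a minimal sketch, assuming that "normalised so xᵢ lies in the unit hypersphere" means a global rescaling so that every sample norm is at most 1; the placeholder data and variable names are illustrative, not from the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split

def normalise_to_unit_ball(X):
    """Rescale so every row satisfies ||x_i|| <= 1 (assumed reading of
    the paper's 'unit hypersphere' normalisation)."""
    return X / np.linalg.norm(X, axis=1).max()

# Placeholder for a UCI dataset already loaded as (X, y).
X, y = np.random.randn(500, 8), np.random.randn(500)

X = normalise_to_unit_ball(X)
# Random 80% training / 20% testing split, as stated in the paper.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
```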
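
For context on the Experiment Setup row, the "random features" being tuned are of the random Fourier feature (Rahimi-Recht) form. The sketch below implements only the standard, untuned RFF map for an RBF kernel with length-scale l; TRF itself additionally learns the frequency distribution through the meta-kernel κ, which is not reproduced here.

```python
import numpy as np

def rff_features(X, D, lengthscale=1.0, rng=None):
    """Map X of shape (n, d) to D random Fourier features approximating
    the RBF kernel k(x, z) = exp(-||x - z||^2 / (2 * lengthscale^2)).

    Standard Rahimi-Recht construction; TRF would instead tune the
    distribution these frequencies are drawn from."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    W = rng.normal(scale=1.0 / lengthscale, size=(d, D))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

# The feature map linearises the kernel method: Phi @ Phi.T approximates
# the RBF Gram matrix, so ridge regression on Phi approximates kernel
# ridge regression with that kernel.
X = np.random.randn(100, 5)
Phi = rff_features(X, D=400, lengthscale=1.0, rng=0)
K_approx = Phi @ Phi.T
```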
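
The hyper-parameter protocol in the Dataset Splits row (10-fold cross-validation, GP-UCB with 105 evaluations and 5 initial random points, swept over D ∈ {50, ..., 1600}) could be approximated as below. This is a sketch, not the authors' code: scikit-optimize's `gp_minimize` with the LCB acquisition function stands in for GP-UCB, and `cv_error` is a placeholder objective you would replace with a real 10-fold evaluation of the model.

```python
import numpy as np
from skopt import gp_minimize
from skopt.space import Real

def cv_error(params, D):
    """Placeholder: should return the 10-fold cross-validation error of
    the model trained with (lam, Lam, gamma, ell) and D random features."""
    lam, Lam, gamma, ell = params
    return float(np.random.rand())  # stand-in value

# All four hyper-parameters are searched over [0.01, 100]; the log-uniform
# prior is an assumption, not stated in the paper.
space = [Real(0.01, 100, prior="log-uniform") for _ in range(4)]

best = None
for D in (50, 100, 200, 400, 800, 1600):
    res = gp_minimize(
        lambda p: cv_error(p, D),
        space,
        acq_func="LCB",       # lower confidence bound ~ GP-UCB for minimisation
        n_calls=105,          # budget of 105 evaluations
        n_initial_points=5,   # 5 points in the initial random set
        random_state=0,
    )
    if best is None or res.fun < best[0]:
        best = (res.fun, D, res.x)

cv_err, D_star, (lam, Lam, gamma, ell) = best
print(f"best D = {D_star}, CV error = {cv_err:.4f}")
```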