Fast Allocation of Gaussian Process Experts
Authors: Trung Nguyen, Edwin Bonilla
ICML 2014 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments show that on medium-sized datasets (of around 10^4 training points) it trains up to 5 times faster than FITC while achieving comparable accuracy. On a large dataset of 10^5 training points, our method significantly outperforms six competitive baselines while requiring only a few hours of training. |
| Researcher Affiliation | Collaboration | Trung V. Nguyen (VTNGUYEN@NICTA.COM.AU), ANU & NICTA; Edwin V. Bonilla (EDWIN.BONILLA@NICTA.COM.AU), NICTA & ANU |
| Pseudocode | No | Explanation: The paper describes its algorithms using textual descriptions and mathematical equations, but it does not include any structured pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | The code for this paper is available at http://trungngv.github.io/fagpe/. |
| Open Datasets | Yes | kin40k (8 dimensions, 10000 training, 30000 testing), pumadyn-32nm (32 dimensions, 7168 training, 1024 testing), and pole telecom (26 dimensions, 10000 training, 5000 testing). We use the exact split as in Lázaro-Gredilla et al. (2010) and Snelson & Ghahramani (2006). We extract the first 10^5 songs from this dataset [the Million Song Dataset] for training and keep the original set of 51630 songs for testing. |
| Dataset Splits | No | Explanation: The paper provides training and testing splits for its datasets but does not specify a separate validation set. |
| Hardware Specification | Yes | The experiments are executed on an Intel(R) Core(TM) i7-2600 3.40GHz CPU with 8GB of RAM using Matlab R2012a. |
| Software Dependencies | Yes | The experiments are executed on an Intel(R) Core(TM) i7-2600 3.40GHz CPU with 8GB of RAM using Matlab R2012a. We optimize the hyperparameters using the conjugate gradients code in the GPML package (Rasmussen & Nickisch, 2010). We have also implemented a parallel version for computation of the marginal likelihood and its derivatives, using the freely available multicore package. |
| Experiment Setup | Yes | The initial inducing locations are randomly selected from the inputs and the hyperparameters are initialized based on the scale of the input features. We optimize the hyperparameters using the conjugate gradients code in the GPML package (Rasmussen & Nickisch, 2010) and limit the maximum number of function evaluations to 1000. (See the illustrative sketch below the table.) |
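
To make the Experiment Setup row concrete, here is a minimal MATLAB sketch of that initialization-and-optimization recipe, assuming GPML v3.x's sparse-GP interface (`covFITC`/`infFITC` and Rasmussen's `minimize`). This is a hypothetical reconstruction, not the authors' released fagpe code; `X`, `y`, `num_inducing`, and the specific initialization constants are assumptions.

```matlab
% Hypothetical reconstruction (not the authors' fagpe code) of the quoted
% setup, assuming GPML v3.x. X is an n-by-d input matrix, y an n-by-1
% target vector; num_inducing is an assumed placeholder value.
num_inducing = 500;                    % assumption; the paper row does not fix this

% Initial inducing locations: randomly selected from the inputs.
idx = randperm(size(X, 1));
Xu  = X(idx(1:num_inducing), :);

% Hyperparameters initialized from the scale of the input features
% (log-scale, as GPML expects); constants here are assumed, not quoted.
ell = log(std(X))';                    % one ARD lengthscale per input dimension
sf  = log(std(y));                     % signal magnitude from the output scale
sn  = log(0.1 * std(y));               % small initial noise level (assumption)
hyp = struct('cov', [ell; sf], 'lik', sn);

covfunc = {@covFITC, {@covSEard}, Xu}; % FITC wrapper around an ARD SE kernel
likfunc = @likGauss;

% Conjugate-gradient optimization of the marginal likelihood; in minimize,
% a negative third argument caps the number of function evaluations,
% here at the 1000 quoted from the paper.
hyp = minimize(hyp, @gp, -1000, @infFITC, [], covfunc, likfunc, X, y);
```

Passing `-1000` rather than `1000` to `minimize` is what enforces the evaluation cap (a positive value would instead bound the number of line searches), which matches the 1000-function-evaluation limit reported in the table.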