EigenGP: Gaussian Process Models with Adaptive Eigenfunctions
Authors: Hao Peng, Yuan Qi
IJCAI 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experimental results demonstrate improved predictive performance of EigenGP over alternative sparse GP methods as well as relevance vector machines. |
| Researcher Affiliation | Academia | Hao Peng, Department of Computer Science, Purdue University, West Lafayette, IN 47907, USA (pengh@purdue.edu); Yuan Qi, Departments of Computer Science and Statistics, Purdue University, West Lafayette, IN 47907, USA (alanqi@cs.purdue.edu) |
| Pseudocode | No | The paper describes the model and optimization process using mathematical equations and textual descriptions, but it does not include structured pseudocode or an algorithm block. |
| Open Source Code | Yes | The implementation is available at: https://github.com/haopeng/EigenGP |
| Open Datasets | Yes | The first dataset is California Housing [Pace and Barry, 1997]. ... The second dataset is Physicochemical Properties of Protein Tertiary Structures (PPPTS) which can be obtained from Lichman [2013]. ... The third dataset is Pole Telecomm that was used in Lázaro-Gredilla et al. [2010]. |
| Dataset Splits | Yes | We randomly split the 8 dimensional data into 10,000 training and 10,640 test points. ... We randomly split the 9 dimensional data into 20,000 training and 25,730 test points. ... It contains 10,000 training and 5000 test samples, each of which has 26 features. |
| Hardware Specification | No | The paper does not provide specific details on the hardware used for running experiments, such as GPU/CPU models, memory, or cloud instance types. |
| Software Dependencies | No | The paper mentions using 'the code from http://www.gaussianprocess.org/gpml/code/matlab/doc' and software implementations for other methods, but it does not provide specific version numbers for these software components. |
| Experiment Setup | Yes | We set M = 25, 50, 100, 200, 400, and the maximum number of iterations in optimization to be 100 for all methods. On large real data, we used the values of η, a₀, and σ² learned from the full GP on a subset that was 1/10 of the training data to initialize all the methods except RVMs. |
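For context on what the paper's M parameter controls: EigenGP builds a sparse GP from the top M eigenfunctions of the kernel, approximated via the Nyström method from a set of basis points. The sketch below is a minimal NumPy illustration of that Nyström construction, not the authors' released MATLAB implementation; the function names, the isotropic RBF kernel, and the scaling convention for the eigenfunctions are assumptions for illustration only.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    """Isotropic squared-exponential kernel (illustrative stand-in for the
    paper's ARD kernel with parameters eta, a0)."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def nystrom_eigenfunctions(X, B, M, lengthscale=1.0, variance=1.0):
    """Evaluate M Nystrom-approximated kernel eigenfunctions at points X,
    using basis points B (in EigenGP these basis points are learned)."""
    Kbb = rbf_kernel(B, B, lengthscale, variance)
    eigvals, eigvecs = np.linalg.eigh(Kbb)          # ascending order
    idx = np.argsort(eigvals)[::-1][:M]             # keep the top M
    lam, U = eigvals[idx], eigvecs[:, idx]
    Kxb = rbf_kernel(X, B, lengthscale, variance)
    # Nystrom extension of eigenvector u_j to arbitrary x
    # (scaling conventions vary across references)
    Phi = Kxb @ U / lam
    return Phi, lam

# With B = X and M = n this recovers the exact Gram matrix:
# Phi @ diag(lam) @ Phi.T = Kxb @ Kbb^{-1} @ Kbx = Kxx.
```

With M ≪ n and learned basis points, `Phi @ diag(lam) @ Phi.T` gives a rank-M kernel approximation, which is what makes training and prediction scale linearly in n for fixed M.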