Scalable Variational Gaussian Processes via Harmonic Kernel Decomposition
Authors: Shengyang Sun, Jiaxin Shi, Andrew Gordon Wilson, Roger B. Grosse
ICML 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We present empirical evaluations in this section. All results were obtained using NVIDIA Tesla P100 GPUs, except in Sec 6.3 we used NVIDIA Tesla T4. Code is available at https://github.com/ssydasheng/Harmonic-Kernel-Decomposition. |
| Researcher Affiliation | Collaboration | University of Toronto, Vector Institute, Microsoft Research New England, New York University. |
| Pseudocode | No | The paper describes algorithms and methods using mathematical equations and textual explanations, but it does not contain a clearly labeled 'Pseudocode' or 'Algorithm' block. |
| Open Source Code | Yes | Code is available at https://github.com/ssydasheng/Harmonic-Kernel-Decomposition. |
| Open Datasets | Yes | We adopt GPs to fit the ETOPO1 elevation data of the earth (Amante & Eakins, 2009); Translate-MNIST dataset, obtained by translating every MNIST image...; CIFAR-10 classification problem. |
| Dataset Splits | Yes | The dataset is randomly split into 72% training, 8% validating, and 20% testing (a minimal split sketch appears after this table). |
| Hardware Specification | Yes | All results were obtained using NVIDIA Tesla P100 GPUs, except in Sec 6.3 we used NVIDIA Tesla T4. |
| Software Dependencies | No | The paper mentions that 'Code is available at https://github.com/ssydasheng/Harmonic-Kernel-Decomposition', but it does not specify version numbers for any software dependencies or libraries used. |
| Experiment Setup | Yes | All models are optimized using the Adam optimizer with learning rate 0.01 for 100K iterations; We optimize all models using the Adam optimizer with learning rate 0.001 for 100K iterations (a training-loop sketch follows the table). |
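The 72/8/20 split quoted in the Dataset Splits row is easy to reproduce. Below is a minimal sketch assuming a NumPy random permutation; the helper name `split_indices`, the seed, and the example count are illustrative and not taken from the released code.

```python
import numpy as np

def split_indices(n, seed=0):
    """Randomly split n example indices into 72% train / 8% validation / 20% test."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(n)
    n_train = int(0.72 * n)
    n_valid = int(0.08 * n)
    return (perm[:n_train],
            perm[n_train:n_train + n_valid],
            perm[n_train + n_valid:])

train_idx, valid_idx, test_idx = split_indices(10_000)
print(len(train_idx), len(valid_idx), len(test_idx))  # 7200 800 2000
```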
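The quoted optimizer settings translate directly into a training loop. The sketch below assumes TensorFlow (the report does not state which framework the repository uses), and `negative_elbo` is a hypothetical placeholder for the paper's variational objective, not the authors' implementation.

```python
import tensorflow as tf

# Toy stand-in for a model's trainable variables; the real model is the
# paper's variational GP with harmonic kernel decomposition.
params = [tf.Variable(tf.random.normal([8]))]

def negative_elbo():
    # Hypothetical placeholder loss, NOT the paper's objective.
    return tf.reduce_sum(tf.square(params[0]))

# Adam with learning rate 0.01 (the other quoted setup uses 0.001).
optimizer = tf.keras.optimizers.Adam(learning_rate=0.01)

for step in range(100_000):  # 100K iterations, as reported
    with tf.GradientTape() as tape:
        loss = negative_elbo()
    grads = tape.gradient(loss, params)
    optimizer.apply_gradients(zip(grads, params))
```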