Sparse Variational Inference for Generalized GP Models
Authors: Rishit Sheth, Yuyang Wang, Roni Khardon
ICML 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | An experimental evaluation for count regression, classification, and ordinal regression illustrates the generality and advantages of the new approach. |
| Researcher Affiliation | Collaboration | Rishit Sheth^a (RISHIT.SHETH@TUFTS.EDU), Yuyang Wang^b (WANGYUYANG1028@GMAIL.COM), Roni Khardon^a (RONI@CS.TUFTS.EDU); ^a Department of Computer Science, Tufts University, Medford, MA 02155, USA; ^b Amazon, 500 9th Ave N, Seattle, WA, USA |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper mentions using third-party tools like the 'GPML toolbox' and 'minFunc' software but does not state that the authors are releasing their own code for the methodology described. |
| Open Datasets | Yes | The dataset ucsdpeds1l (Chan & Vasconcelos, 2012) ... The remaining datasets are available from the UCI Machine Learning Repository (Lichman, 2013). |
| Dataset Splits | Yes | For a given set size, the subset/active set is randomly selected from the data without replacement. After this set is selected, 10-fold cross-validation is performed with the remaining data. |
| Hardware Specification | No | Some of the experiments in this paper were performed on the Tufts Linux Research Cluster supported by Tufts Technology Services. This statement indicates a computing environment but lacks specific details such as CPU/GPU models, memory, or number of cores. |
| Software Dependencies | No | The paper mentions using the 'GPML toolbox' and 'minFunc' software but does not specify their version numbers, which are required for a reproducible description of software dependencies. |
| Experiment Setup | Yes | A Gaussian RBF kernel is used in all cases. A zero-mean function is used in all cases except count regression, where a constant mean function is used. ... The hyperparameters are either estimated from the active set or set to default values (σ² = 1) prior to training, using the same procedure across methods. ... Stopping conditions are ‖∇f(x_k)‖ ≤ 10⁻⁵, f(x_{k−1}) − f(x_k) ≤ 10⁻⁹, or k > 500, where f is the objective function being optimized, k is the iteration number, and x is the current optimization variable. |
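The "Dataset Splits" protocol quoted above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the function name `make_splits` and the specific sizes are assumptions, and only the stated procedure (active set drawn without replacement, then 10-fold cross-validation on the remainder) comes from the paper.

```python
import random

def make_splits(n_examples, active_size, n_folds=10, seed=0):
    """Draw an active set without replacement, then split the rest into CV folds."""
    rng = random.Random(seed)
    indices = list(range(n_examples))
    # Active/subset selection: sampled from the data without replacement.
    active = rng.sample(indices, active_size)
    # Cross-validation is performed with the remaining data.
    remaining = [i for i in indices if i not in set(active)]
    rng.shuffle(remaining)
    # Partition the remainder into n_folds disjoint folds.
    folds = [remaining[k::n_folds] for k in range(n_folds)]
    return active, folds

active, folds = make_splits(n_examples=1000, active_size=100)
assert len(active) == 100
assert sum(len(f) for f in folds) == 900  # all remaining points are used
```

Each fold then serves once as the test set while training uses the other nine, as in standard 10-fold cross-validation.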
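The stopping rule quoted under "Experiment Setup" amounts to three disjunctive checks. A minimal sketch, assuming the first condition is a gradient-norm tolerance (as in the minFunc optimizer the paper uses); the function name `should_stop` is illustrative:

```python
def should_stop(grad_norm, f_prev, f_curr, k,
                grad_tol=1e-5, prog_tol=1e-9, max_iter=500):
    """Return True if any of the three quoted stopping conditions holds."""
    return (grad_norm <= grad_tol           # gradient small enough
            or (f_prev - f_curr) <= prog_tol  # objective no longer decreasing
            or k > max_iter)                  # iteration budget exhausted
```

For example, `should_stop(1e-6, 1.0, 0.5, 10)` stops on the gradient criterion, while `should_stop(1.0, 1.0, 0.5, 10)` continues because the objective is still decreasing and the budget is not exhausted.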