Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Variational Mixtures of Gaussian Processes for Classification
Authors: Chen Luo, Shiliang Sun
IJCAI 2017 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments are performed on multiple real-world datasets, showing improvements in predictive performance over five widely used methods. The results also indicate that for classification MGPC is significantly better than the regression model with mixtures of GPs, in contrast to the existing consensus that their single-model counterparts are comparable. |
| Researcher Affiliation | Academia | Chen Luo, Shiliang Sun Department of Computer Science and Technology, East China Normal University, 3663 North Zhongshan Road, Shanghai 200062, P. R. China |
| Pseudocode | No | The paper does not include a pseudocode block or algorithm labeled as such. |
| Open Source Code | No | The paper does not provide any statement about making the source code available, nor does it include a link to a code repository. |
| Open Datasets | Yes | Table 1 shows the information about the used datasets. All of the datasets are available on UCI data repository [Lichman, 2013]. |
| Dataset Splits | Yes | All of the datasets are randomly split into the training, validation and test set by a ratio of 4:3:3. The truncation level T and the initializations for variance parameters of q(fn) are selected using the validation set. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments (e.g., CPU/GPU models, memory, cloud instances). |
| Software Dependencies | No | The paper mentions using Python packages (e.g., scikit-learn for SVM), but does not specify any software names with version numbers for reproducibility (e.g., Python version, specific library versions). |
| Experiment Setup | Yes | The truncation level T and the initializations for the variance parameters of q(fn) are selected using the validation set. T ranges from 2 to 4, and the corresponding support-set size for each component is set to Ntrain/T. The variances σn are initialized from the grid 0.005 · [1, 2, 4, 8, 16, 32]. |
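The dataset-split and experiment-setup rows above can be sketched in code: a random 4:3:3 train/validation/test split and the hyperparameter grid (truncation level T and initial variances) that is searched on the validation set. This is a minimal illustration assuming NumPy; names such as `split_433` are illustrative and not from the paper.

```python
import numpy as np

def split_433(n_samples, seed=0):
    """Randomly split indices into train/validation/test at a 4:3:3 ratio."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_train = int(0.4 * n_samples)
    n_val = int(0.3 * n_samples)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

# Hyperparameter grid from the "Experiment Setup" row:
T_grid = [2, 3, 4]                                   # truncation levels T
sigma_grid = 0.005 * np.array([1, 2, 4, 8, 16, 32])  # initial variances

train, val, test = split_433(1000)
# Support-set size per mixture component is N_train / T:
support_sizes = {T: len(train) // T for T in T_grid}
```

Each (T, σ) pair would be evaluated on the validation split, and the best pair used on the test split, matching the selection procedure reported in the table.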