Learning in the Wild with Incremental Skeptical Gaussian Processes
Authors: Andrea Bontempelli, Stefano Teso, Fausto Giunchiglia, Andrea Passerini
IJCAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments on synthetic and real-world data show that, as a result, while the original formulation of skeptical learning produces over-confident models that can fail completely in the wild, ISGP works well at varying levels of noise and as new classes are observed. |
| Researcher Affiliation | Academia | Andrea Bontempelli¹, Stefano Teso¹, Fausto Giunchiglia¹,² and Andrea Passerini¹ (¹University of Trento, Italy; ²Jilin University, Changchun, China); name.surname@unitn.it |
| Pseudocode | Yes | Algorithm 1 Pseudo-code of ISGP. Y₀ is provided as input. All branches are stochastic, see the relevant equations. |
| Open Source Code | Yes | The code and experimental setup can be downloaded from: gitlab.com/abonte/incremental-skeptical-gp. |
| Open Datasets | No | The paper uses a synthetic dataset and the location prediction task introduced in [Zeni et al., 2019], but it provides no direct link, DOI, or repository name for either dataset, and it does not state that the data are publicly available. |
| Dataset Splits | Yes | All results are 10-fold cross validated. |
| Hardware Specification | Yes | The experiments were run on a computer with a 2.2 GHz processor and 16 GiB of memory. |
| Software Dependencies | No | The paper mentions 'implemented ISGP using Python 3', but it does not specify any libraries or frameworks with version numbers (e.g., PyTorch 1.x, scikit-learn 0.x). |
| Experiment Setup | Yes | All GP learners used a squared exponential kernel with a length scale of 2 and ρ = 10⁻⁸, without any optimization. The number of trees of SRF was set to 100. |
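
To make the Experiment Setup and Dataset Splits rows concrete, below is a minimal sketch of the reported GP configuration: a squared exponential (RBF) kernel with length scale 2, no hyperparameter optimization, and 10-fold cross-validation. The stand-in dataset, the scikit-learn API, the use of a GP classifier, and the reading of ρ = 10⁻⁸ as a fixed white-noise jitter term are all assumptions for illustration; the paper only states that ISGP was implemented in Python 3.

```python
# Minimal sketch (not the authors' code) of the reported configuration:
# squared exponential kernel, length scale 2, fixed noise, 10-fold CV.
from sklearn.datasets import make_classification
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import cross_val_score

# Stand-in data; the paper's synthetic and location datasets are not public.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# RBF is scikit-learn's squared exponential; "fixed" bounds plus
# optimizer=None mirror "without any optimization". Treating rho = 1e-8
# as a white-noise jitter term is an assumption.
kernel = RBF(length_scale=2.0, length_scale_bounds="fixed") + WhiteKernel(
    noise_level=1e-8, noise_level_bounds="fixed"
)
gp = GaussianProcessClassifier(kernel=kernel, optimizer=None)

# 10-fold cross-validation, matching "All results are 10-fold cross validated."
scores = cross_val_score(gp, X, y, cv=10)
print(f"10-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```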
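
The Pseudocode row refers to Algorithm 1 of ISGP, which is not reproduced here. As a hypothetical illustration of the skeptical decision step at its core, the sketch below accepts or contests a user-supplied label by comparing the GP's predictive confidence against an estimate of the user's accuracy. The function name, the deterministic thresholding (the paper notes the actual branches are stochastic), and the annotator-accuracy bookkeeping are assumptions, not the paper's method.

```python
import numpy as np

def skeptical_step(gp, x, user_label, user_accuracy):
    """Accept or contest a user-provided label for a single example x.

    gp            -- a fitted classifier exposing predict_proba and classes_
    x             -- one example, shape (n_features,)
    user_label    -- the label supplied by the user
    user_accuracy -- running estimate of how often the user is right
    """
    proba = gp.predict_proba(x.reshape(1, -1))[0]
    predicted = gp.classes_[np.argmax(proba)]
    machine_confidence = proba.max()

    # Agreement: nothing to be skeptical about.
    if predicted == user_label:
        return "accept"

    # Disagreement: contest only when the model is more likely to be
    # right than the user; otherwise defer to the user's label.
    if machine_confidence > user_accuracy:
        return "contest"
    return "accept"
```

In the full algorithm, a contested label would trigger a follow-up interaction with the user and an update of the user-reliability estimate; those steps, like the stochastic branching, are omitted from this sketch.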