GP-Tree: A Gaussian Process Classifier for Few-Shot Incremental Learning
Authors: Idan Achituve, Aviv Navon, Yochai Yemini, Gal Chechik, Ethan Fetaya
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the effectiveness of our method against other Gaussian process training baselines, and we show how our general GP approach achieves improved accuracy on standard incremental few-shot learning benchmarks. |
| Researcher Affiliation | Collaboration | Bar-Ilan University, Israel; Nvidia, Israel. |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is publicly available at https://github.com/IdanAchituve/GP-Tree. |
| Open Datasets | Yes | We evaluated GP-Tree in this setup on the fine-grained classification dataset, Caltech-UCSD Birds (CUB) 200-2011 (Welinder et al., 2010). For evaluating GP-Tree with DKL we used the CIFAR-10 and CIFAR-100 datasets, as well as mini-ImageNet, a 100-class subset of the ImageNet (Deng et al., 2009) dataset. |
| Dataset Splits | Yes | Since the data splits made public by (Tao et al., 2020) did not include a validation set, we pre-allocate a small portion of the base classes dataset for hyper-parameter tuning of GP-Tree, SDC (Yu et al., 2020), and PODNet (Douillard et al., 2020) on both datasets. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running its experiments, only general computing concepts. |
| Software Dependencies | No | The paper mentions PyTorch but does not provide specific version numbers for any software dependencies required to replicate the experiments. |
| Experiment Setup | Yes | We used ResNet-18 (He et al., 2016) as the backbone NN with an embedding layer of size 1024 and trained the models for 200 epochs. |
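The Dataset Splits row notes that the authors pre-allocate a small portion of the base-class data as a validation set for hyper-parameter tuning, since the public splits of Tao et al. (2020) lack one. The sketch below illustrates one way to implement such a per-class hold-out; the function name, the 10% fraction, and the seed are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of a per-class validation hold-out from the base classes.
# The 10% fraction and fixed seed are assumptions for illustration only.
import random
from collections import defaultdict

def split_base_classes(labels, val_fraction=0.1, seed=0):
    """Return (train_idx, val_idx), holding out val_fraction of each class."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    train_idx, val_idx = [], []
    for y, idxs in by_class.items():
        rng.shuffle(idxs)
        n_val = max(1, int(len(idxs) * val_fraction))
        val_idx.extend(idxs[:n_val])
        train_idx.extend(idxs[n_val:])
    return train_idx, val_idx

# Example on dummy labels for 5 base classes with 20 samples each:
labels = [i % 5 for i in range(100)]
train_idx, val_idx = split_base_classes(labels)
print(len(train_idx), len(val_idx))  # 90 10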
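The Experiment Setup row specifies a ResNet-18 backbone with a 1024-dimensional embedding layer trained for 200 epochs. Below is a minimal PyTorch sketch of such a backbone, assuming the standard torchvision ResNet-18 with its final fully connected layer swapped for an embedding head; this is not the authors' released code, and the GP-Tree classifier that consumes the embeddings is omitted.

```python
# Minimal sketch (not the authors' code): ResNet-18 backbone producing
# 1024-dimensional embeddings for a downstream GP classifier head.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class EmbeddingBackbone(nn.Module):
    def __init__(self, embedding_dim: int = 1024):
        super().__init__()
        base = resnet18(weights=None)        # random init, i.e. train from scratch
        in_features = base.fc.in_features    # 512 for ResNet-18
        base.fc = nn.Linear(in_features, embedding_dim)
        self.net = base

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Returns the embeddings a GP head (e.g., GP-Tree) would consume.
        return self.net(x)

if __name__ == "__main__":
    model = EmbeddingBackbone()
    dummy = torch.randn(2, 3, 224, 224)
    print(model(dummy).shape)  # torch.Size([2, 1024])
```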