Learning to Learn Dense Gaussian Processes for Few-Shot Learning
Authors: Ze Wang, Zichen Miao, Xiantong Zhen, Qiang Qiu
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments on common benchmark datasets for a variety of few-shot learning tasks. Our dense Gaussian processes present significant improvements over vanilla Gaussian processes and comparable or even better performance than state-of-the-art methods. |
| Researcher Affiliation | Collaboration | Purdue University, University of Amsterdam, Inception Institute of Artificial Intelligence |
| Pseudocode | Yes | Algorithm 1 Learning to learn dense inducing variables. |
| Open Source Code | No | The paper does not provide an explicit statement about open-source code availability or a link to a code repository. |
| Open Datasets | Yes | We perform experiments on the widely used few-shot learning benchmarks including miniImageNet, tieredImageNet, CIFAR-FS, and Caltech-UCSD (CUB) [42]. In miniImageNet [41], there are 100 image classes from a subset of ImageNet [8], with 600 images for each class. |
| Dataset Splits | Yes | We follow the standard practice [9] to split the training, validation, and testing sets with 64, 16, and 20 classes, respectively. tieredImageNet [28] is a large subset of ImageNet that contains 608 classes with 1,300 samples in each class. Specifically, in tieredImageNet, there are 351 classes from 20 categories for training, 97 classes from 6 categories for validation, and 160 classes from 8 different categories for testing. |
| Hardware Specification | No | The paper does not specify the hardware used for running the experiments (e.g., GPU models, CPU types). |
| Software Dependencies | No | The paper does not provide specific version numbers for software dependencies or libraries used. |
| Experiment Setup | Yes | To maintain a balance between efficiency and performance, we choose m = 256 throughout all the experiments... To keep the balance between accuracy and efficiency, we choose = 0.01 and I = 5 throughout all the experiments. |
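For context on the `m = 256` hyperparameter quoted above: the paper's method builds on sparse Gaussian processes with m inducing variables (which it meta-learns; Algorithm 1). The sketch below is not the paper's implementation — it is a generic subset-of-regressors sparse GP predictive mean with fixed, hand-picked inducing inputs `Z` and an assumed RBF kernel, included only to illustrate the role m plays.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=1.0):
    # Squared-exponential kernel between the rows of a and b.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def sparse_gp_predict(X, y, X_test, Z, noise=0.1):
    """Subset-of-regressors sparse GP mean prediction.

    Z holds the m inducing inputs. The paper meta-learns dense
    inducing variables; here Z is simply given (an assumption),
    so this only shows how m controls the approximation size.
    """
    Kzz = rbf_kernel(Z, Z)          # (m, m)
    Kzx = rbf_kernel(Z, X)          # (m, n)
    Ksz = rbf_kernel(X_test, Z)     # (n*, m)
    # Standard SoR mean: K*z (sigma^2 Kzz + Kzx Kxz)^{-1} Kzx y
    A = Kzz * noise ** 2 + Kzx @ Kzx.T
    mu_z = np.linalg.solve(A, Kzx @ y)
    return Ksz @ mu_z

# Toy usage: fit a sine curve with m = 50 inducing points.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (200, 1))
y = np.sin(X[:, 0])
Z = X[:50]                          # m = 50 (paper uses m = 256)
pred = sparse_gp_predict(X, y, X, Z, noise=0.1)
```

The key trade-off the quoted setup refers to: a larger m gives a richer approximation at higher cost, since the linear solve above is over an m-by-m system.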