Few-Shot Learning Through an Information Retrieval Lens
Authors: Eleni Triantafillou, Richard Zemel, Raquel Urtasun
NeurIPS 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our method achieves impressive results on the standard few-shot classification benchmarks while also being capable of few-shot retrieval. |
| Researcher Affiliation | Collaboration | Eleni Triantafillou (University of Toronto, Vector Institute); Richard Zemel (University of Toronto, Vector Institute); Raquel Urtasun (University of Toronto, Vector Institute, Uber ATG) |
| Pseudocode | Yes | Algorithm 1 Few-Shot Learning by Optimizing mAP |
| Open Source Code | No | The paper does not provide an explicit statement or link for open-source code for the described methodology. |
| Open Datasets | Yes | Omniglot dataset [25] and mini-ImageNet, which refers to a subset of the ILSVRC-12 dataset [26] |
| Dataset Splits | Yes | 100 classes out of which 64 are used for training, 16 for validation and 20 for testing. We train our models on the training set and use the validation set for monitoring performance. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU models, CPU types) used for running the experiments. |
| Software Dependencies | No | The paper mentions optimizers (ADAM) and techniques (batch normalization) but does not provide specific version numbers for software dependencies or libraries. |
| Experiment Setup | Yes | Both mAP-SSVM and mAP-DLM are trained with α = 10, and for mAP-DLM the positive update was used. We used \|B\| = 128 and N = 16 for our models and the siamese. |
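The "Experiment Setup" row specifies batches of |B| = 128 examples drawn from N = 16 classes. The sketch below illustrates what such an episodic batch sampler might look like; the function name, data layout, and the even 8-per-class split are assumptions for illustration, not the authors' code.

```python
import random

def sample_batch(examples_by_class, num_classes=16, batch_size=128):
    """Sample a batch of (example_id, class_id) pairs: |B| examples from N classes."""
    per_class = batch_size // num_classes  # 128 // 16 = 8 examples per class
    chosen = random.sample(sorted(examples_by_class), num_classes)
    batch = []
    for c in chosen:
        batch.extend((x, c) for x in random.sample(examples_by_class[c], per_class))
    return batch

# Toy data: 64 training classes (the mini-ImageNet training split size),
# 20 examples each; identifiers are placeholders.
data = {c: [f"img_{c}_{i}" for i in range(20)] for c in range(64)}
batch = sample_batch(data)
assert len(batch) == 128                      # |B| = 128
assert len({c for _, c in batch}) == 16       # N = 16 distinct classes
```

Within each batch, every same-class pair is a positive and every cross-class pair a negative, which is what the mAP-based retrieval objective in the paper ranks over.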