Incremental Few-Shot Learning with Attention Attractor Networks
Authors: Mengye Ren, Renjie Liao, Ethan Fetaya, Richard Zemel
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show empirically that our proposed method can produce state-of-the-art results in incremental few-shot learning on mini-ImageNet [36] and tiered-ImageNet [29] tasks. |
| Researcher Affiliation | Collaboration | Mengye Ren (1,2,3), Renjie Liao (1,2,3), Ethan Fetaya (1,2), Richard S. Zemel (1,2); 1: University of Toronto, 2: Vector Institute, 3: Uber ATG |
| Pseudocode | Yes | Algorithm 1 Meta Learning for Incremental Few-Shot Learning |
| Open Source Code | Yes | Code released at: https://github.com/renmengye/inc-few-shot-attractor-public |
| Open Datasets | Yes | We experiment on two few-shot classification datasets, mini-ImageNet and tiered-ImageNet. Both are subsets of ImageNet [30]... mini-ImageNet Proposed by [36]... tiered-ImageNet Proposed by [29] |
| Dataset Splits | Yes | Proposed by [36], mini-ImageNet contains 100 object classes and 60,000 images. We used the splits proposed by [27], where training, validation, and testing have 64, 16, and 20 classes respectively. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments, such as GPU models, CPU types, or memory specifications. |
| Software Dependencies | No | The paper mentions software components like ResNet, L-BFGS, and ADAM optimizer, but it does not specify exact version numbers for any libraries or frameworks (e.g., PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | We use a standard ResNet backbone [11]... We use L-BFGS [43] to solve the inner loop of our models... We use the ADAM [14] optimizer for meta-learning with a learning rate of 1e-3, which decays by a factor of 10 after 4,000 steps, for a total of 8,000 steps. We fix recurrent backpropagation to 20 iterations and ϵ = 0.1. |
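The Dataset Splits row above gives the class-level partition of mini-ImageNet. A minimal sketch recording those splits as plain Python constants follows; the dictionary name and layout are illustrative and do not come from the released code.

```python
# Class-level splits for mini-ImageNet as reported in the paper
# (splits proposed by [27]); tiered-ImageNet uses its own
# class-hierarchy splits from [29].
MINI_IMAGENET_SPLITS = {
    "train": 64,  # classes used for pretraining / meta-training
    "val": 16,    # classes held out for meta-validation
    "test": 20,   # classes held out for meta-testing
}

# Sanity check: mini-ImageNet contains 100 object classes (60,000 images) in total.
assert sum(MINI_IMAGENET_SPLITS.values()) == 100
```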
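The Experiment Setup row quotes the meta-learning optimization schedule: ADAM with a learning rate of 1e-3, decayed by a factor of 10 after 4,000 steps, for 8,000 steps in total. The sketch below re-expresses that schedule in PyTorch as an assumption (the released code is not PyTorch); `meta_params` and the dummy `meta_loss` are hypothetical placeholders, and the episode construction, L-BFGS inner loop, and recurrent backpropagation (20 iterations, ϵ = 0.1) are omitted.

```python
import torch

# Hypothetical stand-in for the meta-parameters (slow weights) updated in the outer loop.
meta_params = [torch.zeros(10, requires_grad=True)]

# ADAM optimizer with learning rate 1e-3, as reported in the paper.
optimizer = torch.optim.Adam(meta_params, lr=1e-3)

# Decay the learning rate by a factor of 10 after 4,000 steps.
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[4000], gamma=0.1)

TOTAL_STEPS = 8000  # total meta-training steps reported in the paper

for step in range(TOTAL_STEPS):
    optimizer.zero_grad()
    # In the paper, the meta-loss comes from an incremental few-shot episode:
    # the inner loop is solved with L-BFGS and gradients flow back through it
    # via recurrent backpropagation. Here a dummy loss keeps the sketch runnable.
    meta_loss = (meta_params[0] ** 2).sum()
    meta_loss.backward()
    optimizer.step()
    scheduler.step()
```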