TapNet: Neural Network Augmented with Task-Adaptive Projection for Few-Shot Learning

Authors: Sung Whan Yoon, Jun Seo, Jaekyun Moon

ICML 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | When tested on the Omniglot, miniImageNet and tieredImageNet datasets, we obtain state-of-the-art classification accuracies under various few-shot scenarios.
Researcher Affiliation | Academia | School of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea.
Pseudocode | Yes | Algorithm 1: Episodic learning is done by N_E episodes. (a training-loop sketch follows the table)
Open Source Code | Yes | Codes are available on https://github.com/istarjun/TapNet
Open Datasets | Yes | Omniglot (Lake et al., 2015); miniImageNet (Vinyals et al., 2016); tieredImageNet (Ren et al., 2018)
Dataset Splits | Yes | For our experiment, we have used 84×84 downsized color images with a split of 64 training classes, 16 validation classes and 20 test classes. [...] These categories are split into 20 training, 6 validation and 8 test categories, and the training, validation and test sets contain 351, 97 and 160 classes, respectively. (split sizes are restated as constants below)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts) used for running its experiments.
Software Dependencies | No | The paper mentions using the 'Adam optimizer' but does not specify version numbers for any software dependencies.
Experiment Setup | Yes | The Adam optimizer (Kingma & Ba, 2014) with an optimized learning-rate decay is employed. For all experiments, the initial learning rate is 10^-3. In the 20-way Omniglot experiment, the learning rate is reduced by half at every 4.0×10^4 episodes, but for 5-way miniImageNet and 5-way tieredImageNet classification, we cut the learning rate by a factor of 10 at every 2.0×10^4 and 4.0×10^4 episodes, respectively, for 1-shot experiments and every 4.0×10^4 and 3.0×10^4 episodes, respectively, for 5-shot experiments. (the schedule is sketched below)
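
For quick reference, the split sizes quoted in the Dataset Splits row can be written as constants. The names below are ours, not taken from the paper or its code.

```python
# Split sizes quoted in the Dataset Splits row (identifier names are ours).
MINI_IMAGENET_CLASSES = {"train": 64, "val": 16, "test": 20}      # 84x84 color images
TIERED_IMAGENET_CATEGORIES = {"train": 20, "val": 6, "test": 8}   # high-level categories
TIERED_IMAGENET_CLASSES = {"train": 351, "val": 97, "test": 160}
```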
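
The learning-rate schedule quoted in the Experiment Setup row is easy to misread, so here is a minimal sketch of it as episode-indexed step decay. The function name and signature are illustrative, not from the authors' code.

```python
def learning_rate(episode: int, dataset: str, shots: int) -> float:
    """Step-decay schedule as described in the Experiment Setup row.

    The initial LR is 1e-3 everywhere. 20-way Omniglot halves the LR every
    4.0e4 episodes; 5-way miniImageNet / tieredImageNet divide it by 10
    every 2.0e4 / 4.0e4 episodes (1-shot) or 4.0e4 / 3.0e4 episodes (5-shot).
    """
    base_lr = 1e-3
    if dataset == "omniglot":                        # 20-way experiments
        return base_lr * 0.5 ** (episode // 40_000)
    decay_period = {                                 # 5-way experiments
        ("miniImageNet", 1): 20_000,
        ("miniImageNet", 5): 40_000,
        ("tieredImageNet", 1): 40_000,
        ("tieredImageNet", 5): 30_000,
    }[(dataset, shots)]
    return base_lr * 0.1 ** (episode // decay_period)
```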
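
The Pseudocode row quotes Algorithm 1, which trains episodically over N_E episodes. Below is a minimal PyTorch-style skeleton of such a loop, a sketch only: `sample_episode`, `model.episode_loss`, the episode count, and the query-set size of 15 are placeholders and assumptions, not the authors' released implementation.

```python
import torch

def train(model, train_set, sample_episode, n_episodes=200_000):
    """Episodic training skeleton in the spirit of Algorithm 1 (N_E episodes).

    `sample_episode` is a hypothetical helper that draws an N-way K-shot
    support set plus query examples; `model.episode_loss` stands in for
    the per-episode classification loss on the queries.
    """
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    # e.g. 1-shot miniImageNet: divide the LR by 10 every 2.0e4 episodes
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20_000, gamma=0.1)
    for _ in range(n_episodes):
        support, query = sample_episode(train_set, n_way=5, k_shot=1, n_query=15)
        loss = model.episode_loss(support, query)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        scheduler.step()  # advance the episode-indexed LR schedule
```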