A Dual Attention Network with Semantic Embedding for Few-Shot Learning

Authors: Shipeng Yan, Songyang Zhang, Xuming He (pp. 9079-9086)

AAAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We validate our model on three few-shot image classification datasets with extensive ablative study, and our approach shows competitive performances over these datasets with fewer parameters."
Researcher Affiliation | Academia | "Shipeng Yan, Songyang Zhang, Xuming He, School of Information Science and Technology, ShanghaiTech University, {yanshp, zhangsy1, hexm}@shanghaitech.edu.cn"
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | "For facilitating the future research, code and data split are available: https://github.com/tonysy/STANet-PyTorch"
Open Datasets | Yes | "We evaluate our STANet method on the task of few-shot image classification by conducting a set of experiments on three datasets. In addition to two publicly-available datasets, MiniImageNet (Krizhevsky, Sutskever, and Hinton 2012) and Omniglot (Lake, Salakhutdinov, and Tenenbaum 2015), we propose a new few-shot learning benchmark using real-world images from CIFAR100 (Krizhevsky and Hinton 2009), which is referred to as Meta-CIFAR100 dataset."
Dataset Splits | Yes | "We adopted the splits proposed by (Vinyals et al. 2016; Ravi and Larochelle 2017) with 64 classes for training, 16 for validation, 20 for testing in the meta-learning setting."
Hardware Specification | No | The paper does not explicitly describe the hardware used for running its experiments (e.g., specific GPU/CPU models, memory details).
Software Dependencies | No | The paper implies PyTorch through a GitHub link containing "PyTorch", but does not specify its version number or any other software dependencies with version numbers.
Experiment Setup | No | Details of the network architecture and experiment configuration are given only in the supplementary material, not in the main paper text.
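The dataset-split and evaluation protocol the table describes (a 64/16/20 class partition of the 100 MiniImageNet classes, followed by N-way K-shot episodic evaluation) can be sketched as follows. This is a minimal illustration, not the authors' code: the class indices and image names are placeholders, and the shuffle seed does not reproduce the published split.

```python
import random

def split_classes(num_classes=100, n_train=64, n_val=16, seed=0):
    """Partition class indices into meta-train / meta-val / meta-test pools.

    Placeholder for the 64/16/20 MiniImageNet split of Vinyals et al. 2016
    and Ravi & Larochelle 2017; indices here are arbitrary, not the real split.
    """
    classes = list(range(num_classes))
    random.Random(seed).shuffle(classes)
    return (classes[:n_train],
            classes[n_train:n_train + n_val],
            classes[n_train + n_val:])

def sample_episode(data_by_class, class_pool, n_way=5, k_shot=1,
                   n_query=15, rng=None):
    """Sample one N-way K-shot episode from the given class pool.

    Returns (support, query) lists of (image, episode_label) pairs, the
    standard episodic protocol assumed here for few-shot evaluation.
    """
    rng = rng or random.Random(0)
    chosen = rng.sample(class_pool, n_way)
    support, query = [], []
    for label, cls in enumerate(chosen):
        items = rng.sample(data_by_class[cls], k_shot + n_query)
        support += [(x, label) for x in items[:k_shot]]
        query += [(x, label) for x in items[k_shot:]]
    return support, query

train_cls, val_cls, test_cls = split_classes()
# Toy stand-in data: 100 classes with 600 images each (MiniImageNet-sized).
data = {c: [f"img_{c}_{i}" for i in range(600)] for c in range(100)}
support, query = sample_episode(data, test_cls)
print(len(train_cls), len(val_cls), len(test_cls))  # 64 16 20
print(len(support), len(query))                     # 5 75
```

Episodes for meta-training, validation, and testing are drawn from disjoint class pools, so test-time classes are never seen during training, which is what makes the setting "few-shot".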