Asymmetric Distribution Measure for Few-shot Learning

Authors: Wenbin Li, Lei Wang, Jing Huo, Yinghuan Shi, Yang Gao, Jiebo Luo

IJCAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | All experiments are conducted on both miniImageNet [Vinyals et al., 2016] and tieredImageNet [Ren et al., 2018].
Researcher Affiliation | Academia | 1) National Key Laboratory for Novel Software Technology, Nanjing University, China; 2) University of Wollongong, Australia; 3) University of Rochester, USA
Pseudocode | No | The paper describes its methodology in detail using text and mathematical equations but does not include any explicit pseudocode or algorithm blocks.
Open Source Code | Yes | The source code can be downloaded from https://github.com/WenbinLee/ADM.git.
Open Datasets | Yes | All experiments are conducted on both miniImageNet [Vinyals et al., 2016] and tieredImageNet [Ren et al., 2018].
Dataset Splits | Yes | We use the same splits as in [Ravi and Larochelle, 2017], which takes 64, 16 and 20 classes for training, validation and test, respectively. [...] On this dataset [tieredImageNet], we strictly follow the splits used in [Ren et al., 2018], which takes 351, 97 and 160 classes for training, validation and test, respectively. (Splits sketched after the table.)
Hardware Specification | No | The paper describes the network architecture and training details but does not provide specific hardware details, such as the GPU or CPU models used for the experiments.
Software Dependencies | No | The paper mentions the use of the Adam algorithm and a Conv-64F network architecture but does not specify versions for software dependencies such as the programming language or libraries (e.g., Python, PyTorch, TensorFlow).
Experiment Setup | Yes | Both 5-way 1-shot and 5-way 5-shot classification tasks are conducted to evaluate our methods. We use 15 query images per class in each single task (75 query images in total) in both the training and test stages. [...] In the training stage, we use the Adam algorithm [Kingma and Ba, 2014] to train all the models for 40 epochs. In each epoch, we randomly construct 10000 episodes (tasks). Also, the initial learning rate is set as 1e-3 and multiplied by 0.5 every 10 epochs. (Training schedule sketched after the table.)
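The class splits quoted in the Dataset Splits row can be captured in a small configuration table. The following is a minimal sketch; the dictionary name and layout are assumptions (not taken from the authors' repository), while the class counts are exactly those reported for the [Ravi and Larochelle, 2017] and [Ren et al., 2018] splits.

```python
# Hypothetical split table; the class counts are the ones quoted above.
DATASET_SPLITS = {
    "miniImageNet":   {"train": 64,  "val": 16, "test": 20},   # Ravi & Larochelle (2017)
    "tieredImageNet": {"train": 351, "val": 97, "test": 160},  # Ren et al. (2018)
}

# Consistency check: the three splits partition all classes
# (miniImageNet has 100 classes, tieredImageNet has 608).
assert sum(DATASET_SPLITS["miniImageNet"].values()) == 100
assert sum(DATASET_SPLITS["tieredImageNet"].values()) == 608
```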
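The quoted experiment setup (Adam, 40 epochs of 10000 randomly constructed episodes, initial learning rate 1e-3 halved every 10 epochs, 5-way tasks with 15 query images per class) maps onto a standard PyTorch optimizer/scheduler pair. The sketch below shows one way this schedule could be wired up; it is not the authors' released code, and `model` and `sample_episode` are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

# Hyper-parameters taken from the quoted experiment setup.
NUM_EPOCHS = 40             # "train all the models for 40 epochs"
EPISODES_PER_EPOCH = 10000  # "10000 episodes (tasks)" per epoch
INIT_LR = 1e-3              # initial learning rate

def train(model, sample_episode, device="cuda"):
    """Episodic training loop sketch; `model` and `sample_episode` are hypothetical."""
    optimizer = torch.optim.Adam(model.parameters(), lr=INIT_LR)
    # Multiply the learning rate by 0.5 every 10 epochs, as described in the paper.
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)

    for epoch in range(NUM_EPOCHS):
        for _ in range(EPISODES_PER_EPOCH):
            # Each episode is a 5-way task with 1 or 5 support shots and
            # 15 query images per class (75 query images in total).
            support, query, labels = sample_episode(n_way=5, k_shot=1, n_query=15)
            logits = model(support.to(device), query.to(device))
            loss = F.cross_entropy(logits, labels.to(device))

            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        scheduler.step()
```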