Feature Distribution Fitting with Direction-Driven Weighting for Few-Shot Images Classification

Authors: Xin Wei, Wei Du, Huan Wan, Weidong Min

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our method outperforms the current state of the art by an average of 3% for 1-shot on standard few-shot learning benchmarks such as miniImageNet, CIFAR-FS, and CUB. The excellent performance and compelling visualization show that our method can more accurately estimate the ground-truth distributions."
Researcher Affiliation | Academia | Xin Wei¹, Wei Du¹, Huan Wan², Weidong Min³˒⁴*; ¹School of Software, Nanchang University; ²School of Computer and Information Engineering, Jiangxi Normal University; ³School of Mathematics and Computer Science, Institute of Metaverse, Nanchang University; ⁴Jiangxi Key Laboratory of Smart City, Nanchang University; {xinwei, minweidong}@ncu.edu.cn, duwei@email.ncu.edu.cn, huanwan@jxnu.edu.cn
Pseudocode | Yes | "The algorithm of DDWM is shown in Algorithm 1." (Algorithm 1: Training procedure for an M-way-K-shot task)
Open Source Code | No | The paper does not provide any specific links or explicit statements about the release of its source code.
Open Datasets | Yes | "The experiments are conducted on three widely used few-shot learning benchmarks, including miniImageNet (Vinyals et al. 2016), CUB (Wah et al. 2011), and CIFAR-FS (Bertinetto et al. 2019)."
Dataset Splits | Yes | "miniImageNet is divided into 64 base classes, 16 validation classes, and 20 novel classes in all experiments. CUB is split into 100 base classes, 50 validation classes, and 50 novel classes. CIFAR-FS is created by randomly splitting the 100 classes of CIFAR-100 (Krizhevsky and Hinton 2009) into 64 base classes, 16 validation classes, and 20 novel classes. The hyperparameters are tuned on the validation sets..."
Hardware Specification | Yes | "All experiments are conducted with the configuration of Nvidia Quadro RTX 5000 (16GB), RAM 64GB, Ubuntu 20.04 and torch 1.7.1."
Software Dependencies | Yes | Same quoted sentence as above; the relevant software details are Ubuntu 20.04 and torch 1.7.1.
Experiment Setup | Yes | "The hyperparameters of our model contain the power of Tukey's transformation β, the weight controller γ, the number of matched base classes k, the compensation for the within-class variation α, and the number of generated features for each support set class N. Table 3 shows the setting of hyperparameters in detail."
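The hyperparameters listed in the row above (Tukey power β, k matched base classes, variance compensation α, N generated features) suggest a distribution-calibration-style pipeline. The NumPy sketch below illustrates that reading only: the function names, the default values, and the plain Euclidean base-class matching are assumptions, and the paper's direction-driven weighting (controlled by γ) is not reproduced here.

```python
import numpy as np

def tukey_transform(x, beta):
    """Tukey's ladder-of-powers transform with power beta (assumes
    non-negative features, e.g. post-ReLU embeddings)."""
    return np.log(x) if beta == 0 else np.power(x, beta)

def calibrate_and_sample(support, base_means, base_covs, k, alpha,
                         num_generated, rng):
    """Calibrate a support class's distribution by borrowing statistics
    from the k nearest base classes, then draw num_generated features.
    Euclidean nearest-mean matching is a simplifying stand-in for the
    paper's direction-driven weighting."""
    dists = np.linalg.norm(base_means - support, axis=1)
    nearest = np.argsort(dists)[:k]                 # k matched base classes
    mean = np.vstack([base_means[nearest], support[None]]).mean(axis=0)
    cov = base_covs[nearest].mean(axis=0) + alpha   # alpha compensates within-class variation
    return rng.multivariate_normal(mean, cov, size=num_generated)

# Hypothetical usage with made-up shapes and hyperparameter values:
rng = np.random.default_rng(0)
base_means = rng.standard_normal((5, 8))            # 5 base classes, 8-dim features
base_covs = np.stack([np.eye(8)] * 5)
support = tukey_transform(np.abs(rng.standard_normal(8)), beta=0.5)
feats = calibrate_and_sample(support, base_means, base_covs,
                             k=2, alpha=0.2, num_generated=100, rng=rng)
```

The generated features would then augment the support set before fitting the task classifier, which is the usual role of N in such pipelines.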