Few-shot Classification via Ensemble Learning with Multi-Order Statistics
Authors: Sai Yang, Fan Liu, Delong Chen, Jun Zhou
IJCAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Extensive experiments demonstrate that each branch can complement the others and our method can produce a state-of-the-art performance on multiple few-shot classification benchmark datasets." and "We conduct extensive experiments to validate the effectiveness of our method on multiple FSC benchmarks." |
| Researcher Affiliation | Academia | (1) School of Electrical Engineering, Nantong University, Nantong, China; (2) College of Computer and Information, Hohai University, Nanjing, China; (3) Science and Technology on Underwater Vehicle Technology Laboratory, Harbin Engineering University, Harbin, China; (4) School of Information and Communication Technology, Griffith University, Queensland, Australia |
| Pseudocode | Yes | Algorithm 1: Ensemble Learning with multi-Order Statistics (ELMOS) for FSC |
| Open Source Code | No | The paper does not contain any explicit statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | "miniImageNet contains 600 images per class over 100 classes, which are divided into 64, 16 and 20 respectively for base, validation and novel sets." "tieredImageNet consists of 779,165 images belonging to 608 classes..." "CIFAR-FS is derived from CIFAR-100..." "Caltech-UCSD Birds-200-2011 (CUB) has a total number of 11,788 images over 200 bird species." |
| Dataset Splits | Yes | "miniImageNet contains 600 images per class over 100 classes, which are divided into 64, 16 and 20 respectively for base, validation and novel sets." "tieredImageNet... These categories are partitioned into 20 categories (351 classes), 6 categories (97 classes) and 8 categories (160 classes) respectively for base, validation and novel sets." "CIFAR-FS... The total classes are split into 64, 16 and 20 for base, validation and novel sets." "CUB... These species are divided into 100, 50, and 50 for the base, validation and novel sets, respectively." |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models used for running the experiments. It only mentions the use of the 'ResNet12 architecture'. |
| Software Dependencies | No | The paper describes the model architecture (ResNet12) and optimizer (SGD) but does not list specific software dependencies with version numbers (e.g., deep learning frameworks like PyTorch/TensorFlow, or Python versions). |
| Experiment Setup | Yes | We opted for the SGD optimizer with a momentum of 0.9 and a weight decay of 5e-4. The learning rate was initialized to 0.025. We trained the network for 130 epochs with a batch size of 32 in all the experiments. For miniImageNet, tieredImageNet and CIFAR-FS, the learning rate was reduced by a factor of 0.2 at the 70-th and 100-th epoch. For CUB, the learning rate was reduced by a factor of 0.2 for every 15 epochs after the 75-th epoch. (See the training-configuration sketch below the table.) |
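
The experiment-setup row maps directly onto a standard PyTorch optimizer and stepwise learning-rate schedule. The sketch below is a minimal reading of those reported hyperparameters, not the authors' code (which, per the Open Source Code row, is not public): the tiny placeholder backbone stands in for ResNet12, and the CUB milestones (75, 90, 105, 120) are an assumption inferred from "every 15 epochs after the 75-th epoch" within the 130-epoch budget.

```python
# Minimal training-configuration sketch based on the reported hyperparameters.
# The placeholder backbone and the exact CUB milestones are assumptions.
import torch
import torch.nn as nn

backbone = nn.Sequential(                      # stand-in for the ResNet12 backbone
    nn.Conv2d(3, 64, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(64, 64),
)

optimizer = torch.optim.SGD(
    backbone.parameters(),
    lr=0.025,            # initial learning rate
    momentum=0.9,
    weight_decay=5e-4,
)

# miniImageNet / tieredImageNet / CIFAR-FS: decay by a factor of 0.2 at epochs 70 and 100.
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[70, 100], gamma=0.2)
# CUB variant (assumed milestones): decay by 0.2 every 15 epochs after epoch 75,
# i.e. milestones=[75, 90, 105, 120] within the 130-epoch budget.

for epoch in range(130):                       # 130 epochs, batch size 32
    # ... one training epoch over batches of 32 images ...
    scheduler.step()
```

MultiStepLR reproduces the stepwise decay described in the paper; any warmup or other schedule details would have to come from the authors' implementation, which is not available.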