Better Generalized Few-Shot Learning Even without Base Data
Authors: Seong-Woong Kim, Dong-Wan Choi
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experimental results somewhat surprisingly show that the proposed zero-base GFSL method that does not utilize any base samples even outperforms the existing GFSL methods that make the best use of base data. |
| Researcher Affiliation | Academia | Seong-Woong Kim, Dong-Wan Choi* Department of Computer Science and Engineering, Inha University, South Korea wauri6@gmail.com, dchoi@inha.ac.kr |
| Pseudocode | No | The paper describes its methods in prose and mathematical formulas, but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our implementation is available at: https://github.com/bigdata-inha/Zero-Base-GFSL. |
| Open Datasets | Yes | We compare our method with the state-of-the-art (SOTA) GFSL methods using two datasets, mini-ImageNet (Vinyals et al. 2016) and tiered-ImageNet (Ren et al. 2018), which are most widely used in the literature of GFSL. |
| Dataset Splits | Yes | The mini-ImageNet contains 100 classes and 60,000 sample images from ImageNet (Russakovsky et al. 2015), which are then randomly split into 64 training classes, 16 validation classes, and 20 testing classes, proposed by (Ravi and Larochelle 2017). |
| Hardware Specification | Yes | We implement all the methods in PyTorch, and train each model on a machine with NVIDIA A100. |
| Software Dependencies | No | The paper mentions using "PyTorch" but does not provide specific version numbers for PyTorch or any other software dependencies. |
| Experiment Setup | No | The paper states "Full details of our settings are covered in Appendix," which implies that specific experimental setup details, such as hyperparameters, are not present in the main text. |