Adaptive Distribution Calibration for Few-Shot Learning with Hierarchical Optimal Transport

Authors: Dandan Guo, Long Tian, He Zhao, Mingyuan Zhou, Hongyuan Zha

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on standard benchmarks demonstrate that our proposed plug-and-play model outperforms competing approaches and owns the desired cross-domain generalization ability, proving the effectiveness of the learned adaptive weights. (Section 5, Experiments)
Researcher Affiliation | Academia | Dandan Guo (1,2), Long Tian (3), He Zhao (4), Mingyuan Zhou (5), Hongyuan Zha (1,6); (1) School of Data Science, The Chinese University of Hong Kong, Shenzhen; (2) Institute of Robotics and Intelligent Manufacturing; (3) Xidian University; (4) CSIRO's Data61; (5) The University of Texas at Austin; (6) Shenzhen Institute of Artificial Intelligence and Robotics for Society
Pseudocode | Yes | We describe our proposed framework in Algorithm 1 (Algorithm 1: Workflow of our adaptive distribution calibration on few-shot learning). A Python sketch of this workflow appears after the table.
Open Source Code | Yes | If you are interested in our work, please see our code, which is available at https://github.com/DandanGuo1993/Adaptive-Distribution-Calibration-for-Few-Shot-Learning-with-Hierarchical-Optimal-Transport.
Open Datasets | Yes | We evaluate our proposed method on several standard few-shot classification datasets with different levels of granularity, including miniImageNet [33], tieredImageNet [34], CUB [35], and CIFAR-FS [36].
Dataset Splits | Yes | Following the previous work [33], we split miniImageNet into 64 base classes, 16 validation classes, and 20 novel classes. For tieredImageNet, we adopt 351, 97, and 160 classes for training, validation, and test, respectively. For CUB, we split the dataset into 100 base classes, 50 validation classes, and 50 novel classes. For CIFAR-FS, the classes are randomly split into 64, 16, and 20 for meta-training, meta-validation, and meta-testing, respectively. (These splits are collected into a single configuration after the table.)
Hardware Specification | No | The main body of the paper does not specify the hardware used, such as GPU or CPU models. The reviewer checklist indicates that hardware details are in Appendix A, but this appendix is not provided.
Software Dependencies | No | The paper mentions using the LR implementation of scikit-learn [40], but does not provide a version number for scikit-learn or any other software dependency.
Experiment Setup | Yes | Specifically, the number of generated features is 750, the ε in the Sinkhorn algorithm is 0.01, the α in (10) is 0.21, and λ is 0.5, 1, 1, and 0.8 for miniImageNet, tieredImageNet, CUB, and CIFAR-FS, respectively, selected by a grid search using the validation set. The maximum iteration number in the Sinkhorn algorithm is set to 200. (A Sinkhorn sketch using these values follows the table.)
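
For reference, the class splits quoted in the Dataset Splits row can be collected into a single configuration. This is a minimal sketch: the key names are our own, and the dataset-to-split mapping follows the order in which the datasets are introduced above.

    # Class-count splits per dataset (base/train, validation, novel/test),
    # as quoted in the Dataset Splits row above.
    SPLITS = {
        "miniImageNet":   {"base": 64,  "val": 16, "novel": 20},
        "tieredImageNet": {"base": 351, "val": 97, "novel": 160},
        "CUB":            {"base": 100, "val": 50, "novel": 50},
        "CIFAR-FS":       {"base": 64,  "val": 16, "novel": 20},
    }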
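The Experiment Setup row quotes ε = 0.01 and a maximum of 200 iterations for the Sinkhorn algorithm. Below is a minimal NumPy sketch of entropic-regularized optimal transport with those defaults. It is a generic Sinkhorn implementation with uniform marginals, not the authors' exact code, and the cost normalization is our own addition to avoid numerical underflow.

    import numpy as np

    def sinkhorn(cost, eps=0.01, n_iters=200):
        """Entropic-regularized OT between uniform marginals.

        Defaults follow the quoted setup: eps = 0.01, at most 200 iterations.
        """
        cost = cost / max(cost.max(), 1e-12)  # normalize cost to avoid exp underflow
        n, m = cost.shape
        a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)  # uniform marginals
        K = np.exp(-cost / eps)               # Gibbs kernel
        v = np.ones(m)
        for _ in range(n_iters):
            u = a / (K @ v + 1e-12)           # scale rows toward marginal a
            v = b / (K.T @ u + 1e-12)         # scale columns toward marginal b
        return u[:, None] * K * v[None, :]    # transport plan; rows sum to ~a

The returned plan couples the two distributions: its row sums approximate the source marginal a and its column sums the target marginal b.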
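Building on the sinkhorn helper above, the following is a loose sketch of the workflow the Pseudocode row refers to (Algorithm 1): base-class statistics are reweighted by an optimal-transport plan to calibrate each novel class's distribution, extra features are sampled, and a scikit-learn logistic regression is trained, matching the LR classifier and the 750 generated features quoted above. The function name is hypothetical, and how λ and α enter the calibration here is our simplification, not the paper's exact formulation.

    from sklearn.linear_model import LogisticRegression

    def calibrate_and_classify(support_x, support_y, query_x,
                               base_means, base_covs,
                               n_generate=750, alpha=0.21, lam=0.5):
        # Adaptive weights: OT plan between support features and base-class means.
        cost = np.linalg.norm(support_x[:, None, :] - base_means[None, :, :],
                              axis=-1) ** 2
        plan = sinkhorn(cost)                    # shape (n_support, n_base)
        feats, labels = [], []
        for c in np.unique(support_y):
            idx = support_y == c
            w = plan[idx].sum(axis=0)
            w = w / w.sum()                      # adaptive weight per base class
            # Calibrated statistics: blend the support mean with the weighted base
            # means, and regularize the weighted base covariance (simplified form).
            mu = lam * support_x[idx].mean(axis=0) + (1 - lam) * (w @ base_means)
            cov = np.tensordot(w, base_covs, axes=1) + alpha * np.eye(mu.shape[0])
            feats.append(np.random.multivariate_normal(mu, cov, size=n_generate))
            labels.append(np.full(n_generate, c))
        clf = LogisticRegression(max_iter=1000)
        clf.fit(np.vstack(feats), np.concatenate(labels))
        return clf.predict(query_x)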