Adaptive Poincaré Point to Set Distance for Few-Shot Classification

Authors: Rongkai Ma, Pengfei Fang, Tom Drummond, Mehrtash Harandi

AAAI 2022

Reproducibility Variable Result LLM Response
Research Type Experimental We empirically show that such metric yields robustness in the presence of outliers and achieves a tangible improvement over baseline models. This includes the state-of-the-art results on five popular few-shot classification benchmarks, namely mini-ImageNet, tiered-ImageNet, Caltech-UCSD Birds-200-2011 (CUB), CIFAR-FS, and FC100.
Researcher Affiliation Collaboration Rongkai Ma (1), Pengfei Fang (2,3)*, Tom Drummond (4), Mehrtash Harandi (1,3) — (1) Monash University, (2) The Australian National University, (3) DATA61-CSIRO, Australia, (4) The University of Melbourne
Pseudocode Yes Algorithm 1: Train network using adaptive Poincaré point-to-set distance
Input: An episode E, with its associated support set X^s = {(X^s_ij, y^s_i) | i = 1, ..., N, j = 1, ..., K} and a query sample X^q
Output: The optimal parameters for F, f_ω, f_ζ, and f_φ
1: Map X^s and X^q into the Poincaré ball
2: Obtain the tangent support set S using Eq. (7)
3: Ŝ = f_ω(S)  ▷ the refined support set
4: for i in {1, ..., N} do
5:   S̄_i = (1/K) Σ_{j=1}^{K} Ŝ_ij  ▷ the set signature
6:   G_ij = CONCAT(Ŝ_ij, S̄_i)  ▷ the hybrid representation
7:   ω_ij = f_φ(G_ij)  ▷ the weight
8:   Compute the point-to-set distance and set-to-set distance using Eq. (12) and Eq. (13)
9: end for
10: Optimize the model using Eq. (10)
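The core of Algorithm 1 is a weighted point-to-set distance on the Poincaré ball. The sketch below illustrates that idea with NumPy under stated assumptions: it uses the standard closed-form Poincaré geodesic distance and a softmax over negative distances as a stand-in for the learned weighting network f_φ (which the paper implements as a trainable module on hybrid representations, not shown here). Function names such as `poincare_dist` and `adaptive_point_to_set` are illustrative, not from the paper.

```python
import numpy as np

def poincare_dist(x, y, c=1.0):
    """Geodesic distance between points x, y in the Poincaré ball
    of curvature -c, via the standard arccosh closed form."""
    diff2 = np.sum((x - y) ** 2)
    denom = (1.0 - c * np.sum(x ** 2)) * (1.0 - c * np.sum(y ** 2))
    arg = 1.0 + 2.0 * c * diff2 / denom
    return np.arccosh(np.clip(arg, 1.0, None)) / np.sqrt(c)

def adaptive_point_to_set(query, support, c=1.0, tau=1.0):
    """Toy adaptive point-to-set distance: a convex combination of
    query-to-support distances.  The softmax weights here merely
    stand in for the learned weights omega_ij = f_phi(G_ij)."""
    dists = np.array([poincare_dist(query, s, c) for s in support])
    w = np.exp(-dists / tau)
    w /= w.sum()                      # weights sum to 1
    return float(np.sum(w * dists))   # weighted point-to-set distance

# Usage: a 5-shot support set and one query, all well inside the unit ball.
rng = np.random.default_rng(0)
support = rng.normal(size=(5, 4)) * 0.1
query = rng.normal(size=4) * 0.1
d = adaptive_point_to_set(query, support)
```

Because the result is a convex combination, it always lies between the nearest and farthest support distances, which is what gives the metric its robustness to a single outlying support sample.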
Open Source Code No The paper does not provide a specific link or explicit statement about releasing the source code for the described methodology.
Open Datasets Yes Datasets In this section, we will empirically evaluate our approach across five standard benchmarks, i.e., mini-ImageNet (Ravi and Larochelle 2016), tiered-ImageNet (Ren et al. 2018), Caltech-UCSD Birds-200-2011 (CUB) (Wah et al. 2011), CIFAR-FS (Bertinetto et al. 2018) and Fewshot-CIFAR100 (FC100) (Oreshkin, López, and Lacoste 2018).
Dataset Splits No The paper mentions following 'standard protocol to formulate few-shot learning (FSL) with episodic training' and that 'Full details of the datasets and implementation are described in the supplementary material', but does not explicitly provide specific train/validation/test dataset split percentages or sample counts in the main text.
Hardware Specification No The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies No The paper does not provide specific ancillary software details with version numbers (e.g., Python 3.8, PyTorch 1.9).
Experiment Setup No The paper states 'Full details of the datasets and implementation are described in the supplementary material', implying that the specifics of the experimental setup are deferred from the main text. It mentions '100 epochs' and '5-way 1-shot and 5-way 5-shot settings' but omits concrete hyperparameters such as the learning rate and batch size.