The Balanced-Pairwise-Affinities Feature Transform

Authors: Daniel Shalam, Simon Korman

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirically, the transform is highly effective and flexible in its use and consistently improves networks it is inserted into, in a variety of tasks and training schemes. We demonstrate state-of-the-art results in few-shot classification, unsupervised image clustering and person re-identification.
Researcher Affiliation | Academia | Department of Computer Science, University of Haifa, Israel.
Pseudocode | Yes | Algorithm 1: BPA transform on a set of n features (a hedged sketch of such a transform is given after this table).
Open Source Code | Yes | Code is available at github.com/DanielShalam/BPA.
Open Datasets | Yes | MiniImagenet (Vinyals et al., 2016) and CIFAR-FS (Bertinetto et al., 2019), with detailed results in Tables 2 and 3 respectively. We evaluate the performance of the proposed BPA, applying it to a variety of FSC methods including the recent state-of-the-art (PTMap (Hu et al., 2020), Sill-Net (Zhang et al., 2021), PTMap-SF (Chen & Wang, 2021) and PMF (Hu et al., 2022)) as well as to conventional methods like the popular ProtoNet (Snell et al., 2017). ... We experiment on 3 standard datasets, STL-10 (Coates et al., 2011), CIFAR-10 and CIFAR-100-20 (Krizhevsky & Hinton, 2009), ... and tested on the large-scale ReID benchmarks CUHK03 (Li et al., 2014) (both detected and labeled) as well as the Market-1501 (Zheng et al., 2015) set.
Dataset Splits | Yes | We measured accuracy on the validation set of MiniImagenet (Vinyals et al., 2016), using ProtoNet-BPAp (the non-fine-tuned, drop-in version of BPA within ProtoNet (Snell et al., 2017)).
Hardware Specification | No | The paper does not specify the hardware used for the experiments, such as GPU models (e.g., 'NVIDIA A100'), CPU models, or cloud instance types.
Software Dependencies | No | The paper's 'PyTorch-style pseudocode' (Algorithm 1) implies the use of PyTorch, but it does not give a version number for PyTorch or any other software dependency.
Experiment Setup | Yes | BPA has two hyper-parameters, chosen through cross-validation and kept fixed for each application over all datasets: the number of Sinkhorn iterations for computing the optimal transport plan was fixed to 5, and the entropy regularization parameter λ (Eq. (3.1)) was set to 0.1 for UIC and FSC and to 0.25 for ReID. An illustrative call with these settings follows the sketch below.
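
Algorithm 1 in the paper is given as PyTorch-style pseudocode for the BPA transform on a set of n features. The snippet below is a minimal sketch of what such a transform could look like, assuming a cosine-distance cost matrix, a large self-cost on the diagonal, uniform marginals, and log-space Sinkhorn normalization; the function name `bpa_transform` and its arguments are illustrative and not taken from the authors' released code.

```python
import torch
import torch.nn.functional as F


def bpa_transform(features, num_iters=5, lam=0.1, self_cost=1e3):
    """Sketch of a balanced-pairwise-affinities transform.

    features: (n, d) tensor of input embeddings.
    Returns an (n, n) tensor whose rows are the transformed embeddings.
    """
    # Pairwise cosine distances between the n input features.
    v = F.normalize(features, dim=1)
    dist = 1.0 - v @ v.t()                       # (n, n) cost matrix

    # Discourage trivial self-matching with a large cost on the diagonal.
    dist.fill_diagonal_(self_cost)

    # Entropy-regularized OT kernel in log space.
    log_k = -dist / lam

    # Sinkhorn iterations: alternate row/column normalization so the plan
    # becomes (approximately) doubly stochastic, i.e. "balanced".
    for _ in range(num_iters):
        log_k = log_k - torch.logsumexp(log_k, dim=1, keepdim=True)  # rows sum to 1
        log_k = log_k - torch.logsumexp(log_k, dim=0, keepdim=True)  # cols sum to 1
    plan = log_k.exp()

    # Each feature is maximally affine to itself in the output representation.
    plan.fill_diagonal_(1.0)
    return plan
```

In the paper, the transformed rows are what the downstream method (e.g., a few-shot classifier) consumes in place of, or alongside, the original embeddings, which is what makes the transform a drop-in module.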
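
With the hyper-parameters reported in the experiment-setup row (5 Sinkhorn iterations; λ = 0.1 for FSC and UIC, 0.25 for ReID), a call to the sketch above might look like the following; the episode size and feature dimension here are made up for illustration.

```python
import torch

# Hypothetical episode of n = 80 support/query embeddings of dimension 640.
feats = torch.randn(80, 640)

# Reported FSC/UIC settings: 5 Sinkhorn iterations, lambda = 0.1
# (the paper reports lambda = 0.25 for person re-identification).
bpa_feats = bpa_transform(feats, num_iters=5, lam=0.1)

print(bpa_feats.shape)  # torch.Size([80, 80]) -- one row of balanced affinities per input
```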