Contextual Squeeze-and-Excitation for Efficient Few-Shot Image Classification

Authors: Massimiliano Patacchiola, John Bronskill, Aliaksandra Shysheya, Katja Hofmann, Sebastian Nowozin, Richard Turner

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section we report on experiments on VTAB+MD (Dumoulin et al., 2021) and ORBIT (Massiceti et al., 2021).
Researcher Affiliation | Collaboration | Massimiliano Patacchiola, University of Cambridge (mp2008@cam.ac.uk); John Bronskill, University of Cambridge (jfb54@cam.ac.uk); Aliaksandra Shysheya, University of Cambridge (as2975@cam.ac.uk); Katja Hofmann, Microsoft Research (kahofman@microsoft.com); Sebastian Nowozin (nowozin@gmail.com); Richard E. Turner, University of Cambridge (ret26@cam.ac.uk)
Pseudocode | Yes | The pseudo-code for train and test is provided in Appendix B.
Open Source Code | Yes | The code is released with an open-source license: https://github.com/mpatacchiola/contextual-squeeze-and-excitation
Open Datasets | Yes | In this section we report on experiments on VTAB+MD (Dumoulin et al., 2021) and ORBIT (Massiceti et al., 2021).
Dataset Splits | Yes | MD test results are averaged over 1200 tasks per dataset (confidence intervals in appendix). We did not use data augmentation.
Hardware Specification | Yes | We used three workstations (6-core CPU, 110 GB of RAM, and a Tesla V100 GPU).
Software Dependencies | No | The paper mentions 'EfficientNet-B0 from the official Torchvision repository' and the 'Adam optimizer' but does not specify version numbers for general software dependencies such as Python or PyTorch.
Experiment Setup | Yes | We used the meta-training protocol of Bronskill et al. (2021) (10K training tasks, updates every 16 tasks) and the Adam optimizer with a linearly-decayed learning rate in [10^-3, 10^-5] for both the CaSE blocks and the linear head. The head is updated 500 times using a random mini-batch of size 128. (A sketch of this schedule is given below.)
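
To make the quoted schedule concrete, here is a minimal PyTorch sketch of the 500-step linear-head update with Adam and a linear learning-rate decay from 10^-3 to 10^-5, assuming pre-extracted backbone features. The `fit_linear_head` helper and the feature/label tensors are hypothetical illustrations for this report, not the authors' released code.

```python
import torch

# Values taken from the Experiment Setup row above; everything else is a
# hypothetical placeholder used only for illustration.
LR_START, LR_END = 1e-3, 1e-5   # linearly-decayed learning rate range
HEAD_STEPS = 500                # number of linear-head updates
BATCH_SIZE = 128                # random mini-batch size

def fit_linear_head(head, features, labels, device="cuda"):
    """Fit a linear head with Adam and a linear LR decay from 1e-3 to 1e-5."""
    head = head.to(device)
    opt = torch.optim.Adam(head.parameters(), lr=LR_START)
    # LinearLR scales the base LR by a factor that moves linearly from
    # start_factor to end_factor over total_iters scheduler steps.
    sched = torch.optim.lr_scheduler.LinearLR(
        opt, start_factor=1.0, end_factor=LR_END / LR_START, total_iters=HEAD_STEPS
    )
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(HEAD_STEPS):
        # Draw a random mini-batch of size 128 from the support features.
        idx = torch.randint(0, features.shape[0], (BATCH_SIZE,))
        x, y = features[idx].to(device), labels[idx].to(device)
        loss = loss_fn(head(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
        sched.step()
    return head
```

This covers only the head-update loop; the surrounding meta-training protocol (10K tasks, parameter updates every 16 tasks) and the CaSE adaptation itself follow the pseudo-code in Appendix B of the paper and the released repository.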