Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Contextual Squeeze-and-Excitation for Efficient Few-Shot Image Classification
Authors: Massimiliano Patacchiola, John Bronskill, Aliaksandra Shysheya, Katja Hofmann, Sebastian Nowozin, Richard Turner
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section we report on experiments on VTAB+MD (Dumoulin et al., 2021) and ORBIT (Massiceti et al., 2021). |
| Researcher Affiliation | Collaboration | Massimiliano Patacchiola (University of Cambridge); John Bronskill (University of Cambridge); Aliaksandra Shysheya (University of Cambridge); Katja Hofmann (Microsoft Research); Sebastian Nowozin; Richard E. Turner (University of Cambridge) |
| Pseudocode | Yes | The pseudo-code for train and test is provided in Appendix B. |
| Open Source Code | Yes | The code is released with an open-source license: https://github.com/mpatacchiola/contextual-squeeze-and-excitation |
| Open Datasets | Yes | In this section we report on experiments on VTAB+MD (Dumoulin et al., 2021) and ORBIT (Massiceti et al., 2021). |
| Dataset Splits | Yes | MD test results are averaged over 1200 tasks per-dataset (confidence intervals in appendix). We did not use data augmentation. |
| Hardware Specification | Yes | We used three workstations (CPU 6 cores, 110GB of RAM, and a Tesla V100 GPU) |
| Software Dependencies | No | The paper mentions 'EfficientNet-B0 from the official Torchvision repository' and the 'Adam optimizer' but does not specify version numbers for core software dependencies such as Python or PyTorch. |
| Experiment Setup | Yes | We used the meta-training protocol of Bronskill et al. (2021) (10K training tasks, updates every 16 tasks), the Adam optimizer with a linearly-decayed learning rate in [10⁻³, 10⁻⁵] for both the CaSE and linear-head. The head is updated 500 times using a random mini-batch of size 128. |
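
The Dataset Splits row above notes that MD test results are averaged over 1200 tasks per dataset, with confidence intervals reported in the paper's appendix. The sketch below shows one common way such an interval could be computed from per-task accuracies; it is a minimal illustration, not the authors' evaluation code, and all names (`mean_and_ci95`, `task_accuracies`) are hypothetical.

```python
import numpy as np


def mean_and_ci95(task_accuracies: np.ndarray) -> tuple[float, float]:
    """Return the mean accuracy and its 95% confidence half-width."""
    mean = task_accuracies.mean()
    # 1.96 is the normal-approximation z-value for a 95% interval,
    # which is reasonable for a sample of ~1200 tasks.
    half_width = 1.96 * task_accuracies.std(ddof=1) / np.sqrt(len(task_accuracies))
    return float(mean), float(half_width)


# Placeholder accuracies standing in for the 1200 per-dataset test tasks.
accs = np.random.default_rng(0).uniform(0.6, 0.9, size=1200)
m, ci = mean_and_ci95(accs)
print(f"accuracy: {m:.3f} +/- {ci:.3f}")
```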
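
The Experiment Setup row above describes the per-task head adaptation: 500 Adam updates on random mini-batches of 128, with the learning rate linearly decayed from 10⁻³ to 10⁻⁵. The following PyTorch sketch shows how that schedule and update loop could look, assuming support features have already been extracted; the function and variable names (`adapt_head`, `features`, `labels`) are illustrative and do not come from the released code.

```python
import torch
import torch.nn.functional as F


def adapt_head(head: torch.nn.Linear, features: torch.Tensor, labels: torch.Tensor,
               num_updates: int = 500, batch_size: int = 128,
               lr_start: float = 1e-3, lr_end: float = 1e-5) -> None:
    """Fine-tune a linear head on pre-extracted support features."""
    optimizer = torch.optim.Adam(head.parameters(), lr=lr_start)
    # LambdaLR multiplies lr_start by the returned factor, giving a linear
    # decay from lr_start down to lr_end over num_updates steps.
    scheduler = torch.optim.lr_scheduler.LambdaLR(
        optimizer,
        lambda step: 1.0 - (1.0 - lr_end / lr_start) * step / max(num_updates - 1, 1),
    )
    for _ in range(num_updates):
        idx = torch.randint(0, features.size(0), (batch_size,))  # random mini-batch
        logits = head(features[idx])
        loss = F.cross_entropy(logits, labels[idx])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        scheduler.step()
```

In the paper the same linearly-decayed Adam schedule is also applied to the CaSE parameters during meta-training; the sketch covers only the linear-head updates for brevity.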