Laplacian Regularized Few-Shot Learning
Authors: Imtiaz Ziko, Jose Dolz, Eric Granger, Ismail Ben Ayed
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conducted comprehensive experiments over five few-shot learning benchmarks. Our LaplacianShot consistently outperforms state-of-the-art methods by significant margins across different models, settings, and datasets. Furthermore, our transductive inference is very fast, with computational times that are close to inductive inference, and can be used for large-scale few-shot tasks. |
| Researcher Affiliation | Academia | Imtiaz Masud Ziko, Jose Dolz, Eric Granger, Ismail Ben Ayed (ETS Montreal, Canada). |
| Pseudocode | Yes | Algorithm 1: Proposed Algorithm for LaplacianShot. (A hedged sketch of this inference procedure follows the table.) |
| Open Source Code | Yes | An implementation of our LaplacianShot is publicly available at https://github.com/imtiazziko/LaplacianShot |
| Open Datasets | Yes | We used five benchmarks for few-shot classification: miniImageNet, tieredImageNet, CUB, cross-domain CUB (with base training on miniImageNet) and iNat. The miniImageNet benchmark is a subset of the larger ILSVRC-12 dataset (Russakovsky et al., 2015)... The tieredImageNet benchmark (Ren et al., 2018) is also a subset of the ILSVRC-12 dataset... CUB-200-2011 (Wah et al., 2011)... The iNat benchmark, introduced recently for few-shot classification in (Wertheimer & Hariharan, 2019)... |
| Dataset Splits | Yes | We use the standard split of 64 base, 16 validation and 20 test classes (Ravi & Larochelle, 2017; Wang et al., 2019). The tieredImageNet benchmark... We follow standard splits with 351 base, 97 validation and 160 test classes for the experiments. CUB-200-2011... splits into 100 base, 50 validation and 50 test classes for the experiments. We tuned this parameter using the validation classes by sampling 500 few-shot tasks. |
| Hardware Specification | Yes | We used two 16GB P100 GPUs for network training with base classes. |
| Software Dependencies | No | The paper mentions using 'SGD optimizer' and specific data augmentation procedures, but does not specify version numbers for any programming languages, libraries, or frameworks (e.g., Python, PyTorch, TensorFlow). |
| Experiment Setup | Yes | We trained the network models using the standard cross-entropy loss on the base classes, with a label-smoothing (Szegedy et al., 2016) parameter set to 0.1. ... We used the SGD optimizer to train the models, with mini-batch size set to 256 for all the networks, except for WRN and DenseNet, where we used mini-batch sizes of 128 and 100, respectively. ... We used k = 3 for miniImageNet, CUB and tieredImageNet and k = 10 for the iNat benchmark. Regularization parameter λ is chosen based on the validation class accuracy... For the iNat experiments, we simply fix λ = 1.0. (A hedged sketch of this λ selection loop follows the table.) |
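
The Pseudocode row above points to Algorithm 1 of the paper: transductive inference that trades a unary nearest-prototype cost against a Laplacian term encouraging nearby query features to take the same label. Below is a minimal NumPy sketch of that style of iteration, under the assumption that each step is a closed-form softmax update combining the unary costs with a k-NN neighbourhood vote; the names `knn_affinity` and `laplacian_shot` are illustrative, not taken from the authors' repository, and preprocessing steps such as feature rescaling are omitted.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with max-subtraction for numerical stability."""
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def knn_affinity(feats, k=3):
    """Symmetric k-NN affinity matrix: w[q, p] = 1 if p is among q's k nearest."""
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)          # exclude self-affinity
    idx = np.argsort(d2, axis=1)[:, :k]   # k nearest neighbours per query point
    w = np.zeros_like(d2)
    w[np.arange(len(feats))[:, None], idx] = 1.0
    return np.maximum(w, w.T)             # symmetrize

def laplacian_shot(query, prototypes, lam=1.0, k=3, n_iter=20):
    """Laplacian-regularized label assignment for one few-shot task (sketch)."""
    # Unary cost: squared Euclidean distance from each query to each prototype.
    a = ((query[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    w = knn_affinity(query, k=k)
    y = softmax(-a)                        # initialize from unary costs alone
    for _ in range(n_iter):
        # Closed-form update: unary cost plus a lambda-weighted neighbour vote.
        y = softmax(-a + lam * (w @ y))
    return y.argmax(axis=1)                # hard labels for the query set
```

Each iteration is an independent closed-form update over the query set, which is consistent with the quoted claim that the transductive inference runs in times close to inductive inference.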
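
The Experiment Setup row states that λ is tuned on validation classes by sampling 500 few-shot tasks. The loop below sketches one plausible such selection procedure, reusing `laplacian_shot` from the previous block; `sample_task` and the candidate grid `lam_grid` are hypothetical placeholders, since the paper does not list the exact candidate values.

```python
import numpy as np

def tune_lambda(sample_task, lam_grid=(0.1, 0.3, 0.5, 0.8, 1.0, 1.5), n_tasks=500):
    """Pick the lambda with the highest mean accuracy over sampled validation tasks."""
    best_lam, best_acc = None, -1.0
    for lam in lam_grid:
        accs = []
        for _ in range(n_tasks):
            # sample_task() is assumed to return query features, their ground-truth
            # labels, and class prototypes computed from the support set.
            query, query_y, prototypes = sample_task()
            pred = laplacian_shot(query, prototypes, lam=lam, k=3)
            accs.append((pred == query_y).mean())
        mean_acc = float(np.mean(accs))
        if mean_acc > best_acc:
            best_lam, best_acc = lam, mean_acc
    return best_lam
```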