Uniform Sampling over Episode Difficulty

Authors: Sébastien Arnold, Guneet Dhillon, Avinash Ravichandran, Stefano Soatto

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this paper, we first propose a method to approximate episode sampling distributions based on their difficulty. Building on this method, we perform an extensive analysis and find that sampling uniformly over episode difficulty outperforms other sampling schemes... We demonstrate the efficacy of our method across popular few-shot learning datasets, algorithms, network architectures, and protocols.
Researcher Affiliation | Collaboration | Sébastien M. R. Arnold (University of Southern California), Guneet S. Dhillon (University of Oxford), Avinash Ravichandran (Amazon Web Services), Stefano Soatto (Amazon Web Services; University of California, Los Angeles)
Pseudocode | Yes | Algorithm 1: Episodic training with Importance Sampling (a hedged sketch of this training loop follows the table below).
Open Source Code | No | The paper mentions using an existing open-source implementation of FEAT (footnote 5: 'Available at: https://github.com/Sha-Lab/FEAT'), but it provides no link to, or explicit statement about, the source code of its own proposed uniform sampling method.
Open Datasets | Yes | We use two standardized image classification datasets, Mini-ImageNet [58] and Tiered-ImageNet [45], both subsets of ImageNet [10].
Dataset Splits | Yes | Mini-ImageNet consists of 64 classes for training, 16 for validation, and 20 for testing; we use the class splits introduced by Ravi and Larochelle [43]. Tiered-ImageNet contains 608 classes split into 351, 97, and 160 for training, validation, and testing, respectively. (These splits are collected in a configuration sketch below the table.)
Hardware Specification | No | The paper does not provide specific details on the hardware used, such as GPU models, CPU types, or memory specifications.
Software Dependencies | No | The paper mentions the use of algorithms such as ProtoNet, MAML, and ANIL, but does not specify the versions of any software libraries, frameworks (e.g., PyTorch, TensorFlow), or programming languages used for the implementation.
Experiment Setup | Yes | We train for 20k iterations with a mini-batch of size 16 and 32 for Mini-ImageNet and Tiered-ImageNet respectively, and validate every 1k iterations on 1k episodes. (These settings are collected in the training-schedule sketch below the table.)
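
The Pseudocode row above refers to the paper's Algorithm 1 (episodic training with importance sampling). What follows is a minimal, self-contained sketch of that idea, not the authors' implementation: it assumes episode difficulty is measured by the episode loss, approximates the difficulty distribution with an online Gaussian fit, and reweights each episode's loss by the inverse of that density so that training behaves as if difficulties had been sampled uniformly. The function names (`sample_episode`, `episode_loss`) and the toy episode encoding are hypothetical stand-ins for a real few-shot pipeline.

```python
import math
import random

def sample_episode(rng):
    # Hypothetical stand-in: a real pipeline would draw an N-way K-shot task.
    # Here an "episode" is reduced to a scalar that determines its loss.
    return rng.gauss(1.0, 0.3)

def episode_loss(episode):
    # Hypothetical stand-in for evaluating the model's loss on the episode.
    return max(episode, 1e-3)

class RunningGaussian:
    """Online Gaussian fit (Welford's algorithm) of observed difficulties."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 1e-6  # small m2 avoids zero variance
    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
    def pdf(self, x):
        var = self.m2 / max(self.n - 1, 1)
        return math.exp(-((x - self.mean) ** 2) / (2 * var)) / math.sqrt(2 * math.pi * var)

def train(iterations=20_000, batch_size=16, seed=0):
    rng = random.Random(seed)
    difficulty = RunningGaussian()
    for _ in range(iterations):
        batch = [sample_episode(rng) for _ in range(batch_size)]
        losses = [episode_loss(ep) for ep in batch]
        for loss in losses:
            difficulty.update(loss)
        # Importance weights: the target over difficulty is uniform (constant
        # density) and the proposal is the fitted Gaussian, so w = 1 / pdf(loss).
        # Weights are self-normalized within the mini-batch.
        weights = [1.0 / max(difficulty.pdf(loss), 1e-8) for loss in losses]
        total = sum(weights)
        batch_loss = sum(w * loss for w, loss in zip(weights, losses)) / total
        # A real implementation would now take a gradient step on batch_loss.
    return difficulty
```

The self-normalization within the mini-batch is one common practical choice for stabilizing importance weights; the paper's own Algorithm 1 should be consulted for the exact weighting scheme.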
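
For reference, the class splits quoted in the Dataset Splits row, collected into a single mapping. Only the numbers come from the paper; the dictionary layout and key names are ours.

```python
# Class counts per split, as reported in the paper (Ravi & Larochelle splits
# for Mini-ImageNet; 351 + 97 + 160 = 608 classes for Tiered-ImageNet).
CLASS_SPLITS = {
    "mini-imagenet":   {"train": 64,  "validation": 16, "test": 20},
    "tiered-imagenet": {"train": 351, "validation": 97, "test": 160},
}
```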
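
Likewise, the training schedule from the Experiment Setup row as a single configuration block; the values are the paper's, the key names are illustrative.

```python
# Training schedule as stated in the paper; key names are assumptions.
TRAIN_CONFIG = {
    "iterations": 20_000,
    "batch_size": {"mini-imagenet": 16, "tiered-imagenet": 32},
    "validate_every": 1_000,       # run validation every 1k training iterations
    "validation_episodes": 1_000,  # number of episodes per validation run
}
```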