Meta-Learning Universal Priors Using Non-Injective Change of Variables

Authors: Yilang Zhang, Alireza Sadeghi, Georgios Giannakis

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Numerical experiments conducted on three few-shot learning datasets validate the superiority of data-driven priors over the prespecified ones, showcasing its pronounced effectiveness when dealing with extremely limited data resources. In this section, we test and showcase the empirical superiority of MetaNCoV on both synthetic and real datasets.
Researcher Affiliation | Academia | Yilang Zhang, Alireza Sadeghi, and Georgios B. Giannakis; Department of ECE, University of Minnesota, Minneapolis, MN 55414 (zhan7453@umn.edu, sadeg012@umn.edu, georgios@umn.edu).
Pseudocode | Yes | Algorithm 1: MetaNCoV algorithm.
Open Source Code | Yes | Codes for reproducing the results are available at https://github.com/zhangyilang/MetaNCoV.
Open Datasets | Yes | miniImageNet [55] contains 60,000 images sampled from the full ImageNet (ILSVRC-12) dataset... tieredImageNet [42] is a larger subset of the ImageNet dataset... CUB-200-2011 [57] is an extended version of the Caltech-UCSD Birds (CUB)-200 dataset...
Dataset Splits | Yes | The dataset is divided into a training subset D_t^trn ⊂ D_t, and a validation subset D_t^val := D_t \ D_t^trn. In the experiments, we adopt the dataset split suggested by [41], where 64, 16, and 20 disjoint classes can be accessed during the training, validation, and testing phases of meta-learning. (An illustrative class-split sketch is given after this table.)
Hardware Specification | Yes | Our codes are run on a server equipped with an Intel Core i7-12700 CPU, and an NVIDIA RTX A5000 GPU.
Software Dependencies | No | The paper mentions optimizers like SGD with Nesterov momentum and Adam but does not list specific software libraries or their version numbers (e.g., PyTorch, Python, CUDA versions) used for implementation.
Experiment Setup | Yes | The hyperparameters used for the few-shot classification experiments are the same as those in MAML [10], which are listed in Table 6. To enhance the stability of the training process, we use SGD with Nesterov momentum instead of Adam as the optimizer for (10a). (An illustrative optimizer sketch is given after this table.)
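
The class-level split quoted in the Dataset Splits row can be illustrated with a minimal Python sketch. It assumes only a generic list of 100 class identifiers and a hypothetical split_classes helper; the 64/16/20 partition follows the split of [41], but none of the names below come from the paper's released code.

```python
# Illustrative class-level split for a miniImageNet-style benchmark.
# The 64/16/20 partition follows the split of [41]; the class identifiers
# and the split_classes helper are placeholders, not the paper's code.
import random

def split_classes(all_classes, n_train=64, n_val=16, n_test=20, seed=0):
    """Partition class labels into disjoint meta-train/val/test sets."""
    assert len(all_classes) == n_train + n_val + n_test
    rng = random.Random(seed)
    shuffled = all_classes[:]
    rng.shuffle(shuffled)
    meta_train = shuffled[:n_train]
    meta_val = shuffled[n_train:n_train + n_val]
    meta_test = shuffled[n_train + n_val:]
    return meta_train, meta_val, meta_test

# Example usage with placeholder class identifiers.
classes = [f"class_{i:03d}" for i in range(100)]
train_cls, val_cls, test_cls = split_classes(classes)
print(len(train_cls), len(val_cls), len(test_cls))  # 64 16 20
```

Each task's own data D_t is then further split into a support set D_t^trn and a query set D_t^val, as described in the quoted text.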
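The optimizer choice quoted in the Experiment Setup row (SGD with Nesterov momentum in place of Adam for the outer-loop update (10a)) can be sketched in PyTorch as follows. The learning rate, momentum value, and placeholder parameters are assumptions for illustration, not the settings from the paper's Table 6.

```python
# Sketch of the meta (outer-loop) optimizer configuration described above:
# SGD with Nesterov momentum instead of Adam. Hyperparameter values are
# placeholders; the paper's actual settings are listed in its Table 6.
import torch

# Placeholder meta-level parameters (e.g., the learned prior's parameters).
meta_parameters = [torch.nn.Parameter(torch.zeros(10))]

meta_optimizer = torch.optim.SGD(
    meta_parameters,
    lr=1e-3,        # assumed outer-loop learning rate
    momentum=0.9,   # Nesterov acceleration requires momentum > 0
    nesterov=True,
)

# Generic meta-update step with a dummy loss standing in for the meta-objective.
loss = (meta_parameters[0] ** 2).sum()
meta_optimizer.zero_grad()
loss.backward()
meta_optimizer.step()
```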