Task Groupings Regularization: Data-Free Meta-Learning with Heterogeneous Pre-trained Models
Authors: Yongxian Wei, Zixuan Hu, Li Shen, Zhenyi Wang, Yu Li, Chun Yuan, Dacheng Tao
ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Comprehensive experiments showcase the superiority of our approach in multiple benchmarks, effectively tackling the model heterogeneity in challenging multi-domain and multi-architecture scenarios. |
| Researcher Affiliation | Academia | 1 Tsinghua Shenzhen International Graduate School, Tsinghua University, China; 2 College of Computing & Data Science, Nanyang Technological University, Singapore; 3 School of Cyber Science and Technology, Sun Yat-sen University, China; 4 University of Maryland, College Park, USA; 5 Department of Computer Science and Engineering, The Chinese University of Hong Kong, China. |
| Pseudocode | Yes | Algorithm 1: Task Groupings Regularization |
| Open Source Code | No | The paper does not provide any statement about releasing source code or a link to a code repository. |
| Open Datasets | Yes | We conduct experiments on two widely-used DFML benchmark datasets and one fine-grained dataset, including miniImageNet (Ravi & Larochelle, 2017), CIFAR-FS (Bertinetto et al., 2019), and CUB (Wah et al., 2011). |
| Dataset Splits | Yes | Following standard splits, we split each dataset into the meta-training, meta-validating and meta-testing subsets with disjoint label spaces. (A minimal class-split sketch illustrating disjoint label spaces follows the table.) |
| Hardware Specification | No | The paper does not specify the hardware used for the experiments (e.g., CPU or GPU models). It refers to computational cost only in general terms, without concrete hardware details. |
| Software Dependencies | No | The paper mentions software components such as the Adam optimizer and MAML, and a footnote URL ('https://pytorch.org/vision/stable/models/generated/torchvision.models.resnet34') implies the use of PyTorch/torchvision. However, it does not give version numbers for any of these dependencies (e.g., Python, PyTorch, or CUDA). |
| Experiment Setup | Yes | For hyperparameters, we configure the number of task groups c to 5. The step size β for the displacement is set to 0.001, and the size of minibatches m is set to 4. The memory bank B is limited to store 20 tasks. We report the average accuracy over 600 meta-testing tasks. (These values are collected in a configuration sketch after the table.) |
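
The disjoint-label-space splitting quoted in the Dataset Splits row can be pictured with a minimal Python sketch. This is illustrative only: the benchmarks use fixed, published class lists (e.g., miniImageNet's 64/16/20-class split from Ravi & Larochelle, 2017), whereas the function below simply shuffles class indices with illustrative ratios; nothing here is taken from the authors' (unreleased) code.

```python
import random

def split_classes(num_classes, seed=0, ratios=(0.64, 0.16, 0.20)):
    """Partition class indices into disjoint meta-train / meta-val / meta-test pools.

    Illustrative only: real benchmarks ship fixed class lists; the ratios here
    mirror miniImageNet's 64/16/20 split out of 100 classes.
    """
    classes = list(range(num_classes))
    random.Random(seed).shuffle(classes)
    n_train = int(ratios[0] * num_classes)
    n_val = int(ratios[1] * num_classes)
    meta_train = classes[:n_train]
    meta_val = classes[n_train:n_train + n_val]
    meta_test = classes[n_train + n_val:]
    # Disjoint label spaces: no class index appears in more than one subset.
    assert set(meta_train).isdisjoint(meta_val)
    assert set(meta_test).isdisjoint(meta_train) and set(meta_test).isdisjoint(meta_val)
    return meta_train, meta_val, meta_test

# Example: 100 classes split 64/16/20, as in miniImageNet.
train_cls, val_cls, test_cls = split_classes(100)
```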
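
The hyperparameters quoted in the Experiment Setup row can be gathered into a small configuration object. The values are taken directly from the quoted text, but the class and its field names are my own shorthand for the paper's symbols (c, β, m, the memory bank B), not the authors' code.

```python
from dataclasses import dataclass

@dataclass
class DFMLConfig:
    """Hypothetical configuration sketch; values quoted from the paper."""
    num_task_groups: int = 5          # c: number of task groups
    displacement_step: float = 1e-3   # beta: step size for the displacement
    minibatch_size: int = 4           # m: size of task minibatches
    memory_bank_size: int = 20        # B: maximum number of stored tasks
    num_test_tasks: int = 600         # meta-testing tasks averaged for accuracy

cfg = DFMLConfig()
print(cfg)
```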