Revisit Multimodal Meta-Learning through the Lens of Multi-Task Learning
Authors: Milad Abdollahzadeh, Touba Malekzadeh, Ngai-Man (Man) Cheung
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate the proposed model in both multimodal and unimodal few-shot classification scenarios. |
| Researcher Affiliation | Academia | Milad Abdollahzadeh, Touba Malekzadeh, Ngai-Man Cheung; Singapore University of Technology and Design; {milad_abdollahzadeh, touba_malekzadeh, ngaiman_cheung}@sutd.edu.sg |
| Pseudocode | Yes | Algorithm 1: Measuring Transference on a Target Task. |
| Open Source Code | Yes | The code for this project is available at https://miladabd.github.io/KML. |
| Open Datasets | Yes | We combine multiple widely used datasets (Omniglot [47], mini-Imagenet [12], FC100 [48], CUB [49], and Aircraft [50]). |
| Dataset Splits | No | The paper refers to meta-training and meta-test sets, and to support and query sets within each task, but does not give explicit percentages or counts for the overall training, validation, and test splits of the datasets used (a generic episode-sampling sketch follows the table). |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running experiments. |
| Software Dependencies | No | The paper mentions using specific meta-learners and modifying existing code but does not provide version numbers for any software dependencies. |
| Experiment Setup | No | The details of the experimental setup can be found in the supplementary. |
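
To make the "support set" / "query set" terminology in the Dataset Splits row concrete, here is a minimal, generic sketch of N-way K-shot episode sampling as used in few-shot classification. This is not the authors' code; the function name `sample_episode`, the toy dataset, and the way/shot/query counts are illustrative assumptions only.

```python
# Generic N-way K-shot episode sampling sketch (assumed illustration,
# not the paper's implementation).
import random
from collections import defaultdict

def sample_episode(dataset, n_way=5, k_shot=1, q_queries=15, rng=random):
    """Sample one few-shot episode (task) from a labeled dataset.

    dataset: list of (example, label) pairs.
    Returns (support, query), each a list of (example, episode_label) pairs,
    where episode_label is re-indexed to 0..n_way-1 within this task.
    """
    by_class = defaultdict(list)
    for x, y in dataset:
        by_class[y].append(x)

    # Pick N classes that have enough examples for K support + Q query items.
    eligible = [c for c, xs in by_class.items() if len(xs) >= k_shot + q_queries]
    classes = rng.sample(eligible, n_way)

    support, query = [], []
    for episode_label, c in enumerate(classes):
        examples = rng.sample(by_class[c], k_shot + q_queries)
        support += [(x, episode_label) for x in examples[:k_shot]]
        query += [(x, episode_label) for x in examples[k_shot:]]
    return support, query

if __name__ == "__main__":
    # Toy dataset: 20 classes with 30 placeholder examples each.
    toy = [(f"img_{c}_{i}", c) for c in range(20) for i in range(30)]
    support, query = sample_episode(toy, n_way=5, k_shot=1, q_queries=15)
    print(len(support), len(query))  # 5 support examples, 75 query examples
```

In this terminology, the meta-training and meta-test sets are disjoint pools of classes from which such episodes are drawn; the missing detail noted in the table is the exact class or example counts assigned to each pool.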