Notice: The reproducibility variables underlying each score are classified by an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias, so scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Revisit Multimodal Meta-Learning through the Lens of Multi-Task Learning
Authors: Milad Abdollahzadeh, Touba Malekzadeh, Ngai-Man (Man) Cheung
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate the proposed model in both multimodal and unimodal few-shot classification scenarios. |
| Researcher Affiliation | Academia | Milad Abdollahzadeh, Touba Malekzadeh, Ngai-Man Cheung Singapore University of Technology and Design EMAIL |
| Pseudocode | Yes | Algorithm 1: Measuring Transference on a Target Task. |
| Open Source Code | Yes | The code for this project is available at https://miladabd.github.io/KML. |
| Open Datasets | Yes | We combine multiple widely used datasets (Omniglot [47], mini-Imagenet [12], FC100 [48], CUB [49], and Aircraft [50]). |
| Dataset Splits | No | The paper mentions 'meta-training' and 'meta-test' sets, and 'support' and 'query' sets within tasks, but does not give specific percentages or counts for the overall training, validation, and test splits of the datasets used. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running experiments. |
| Software Dependencies | No | The paper mentions using specific meta-learners and modifying existing code, but does not provide version numbers for any software dependencies. |
| Experiment Setup | No | The details of the experimental setup can be found in the supplementary. |