Adaptive Cross-Modal Few-shot Learning

Authors: Chen Xing, Negar Rostamzadeh, Boris N. Oreshkin, Pedro O. Pinheiro

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Through a series of experiments, we show that by this adaptive combination of the two modalities, our model outperforms current uni-modality few-shot learning methods and modality-alignment methods by a large margin on all benchmarks and few-shot scenarios tested. (a sketch of this adaptive fusion follows the table)
Researcher Affiliation | Collaboration | Chen Xing: College of Computer Science, Nankai University, Tianjin, China and Element AI, Montreal, Canada; Negar Rostamzadeh: Element AI, Montreal, Canada; Boris N. Oreshkin: Element AI, Montreal, Canada; Pedro O. Pinheiro: Element AI, Montreal, Canada
Pseudocode | Yes | Algorithm 1, in the supplementary material, shows the pseudocode for calculating the episode loss. (a hedged episode-loss sketch follows the table)
Open Source Code | Yes | Source code is released at https://github.com/ElementAI/am3.
Open Datasets | Yes | We conduct main experiments with two widely used few-shot learning datasets: miniImageNet [53] and tieredImageNet [39]. We also experiment on CUB-200 [55], a widely used zero-shot learning dataset.
Dataset Splits | Yes | Few-shot learning models are trained on a labeled dataset Dtrain and tested on Dtest. The class sets are disjoint between Dtrain and Dtest. (see the class-disjoint split sketch after the table)
Hardware Specification | No | No specific hardware details (such as GPU/CPU models or types) used for running the experiments are mentioned in the paper.
Software Dependencies | No | The paper mentions using GloVe for word embeddings but does not specify software dependencies with version numbers (e.g., Python or PyTorch versions). (see the GloVe-loading sketch after the table)
Experiment Setup | No | The paper states 'For details on network architectures, training and evaluation procedures, see Appendix D.', but these details, including hyperparameters and training configurations, are not present in the main body of the paper provided.
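
The Research Type row quotes the paper's central claim: an adaptive, per-class convex combination of the visual and semantic modalities. Below is a minimal PyTorch sketch of that fusion, assuming a prototypical-network backbone; the module name AdaptiveMixing, layer sizes, and dimensions are illustrative placeholders, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class AdaptiveMixing(nn.Module):
    """Sketch of AM3-style fusion: a convex combination of the visual
    prototype p_c and a transformed label embedding w_c, with a per-class
    mixing coefficient lambda_c predicted from w_c."""

    def __init__(self, word_dim=300, feat_dim=512, hidden=300):
        super().__init__()
        # g: maps GloVe label embeddings into the visual feature space.
        self.g = nn.Sequential(nn.Linear(word_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, feat_dim))
        # h: adaptive mixing network producing one scalar per class.
        self.h = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, 1))

    def forward(self, visual_protos, label_embs):
        # visual_protos: (n_way, feat_dim) class means of support features.
        # label_embs:    (n_way, word_dim) GloVe vectors of class names.
        w = self.g(label_embs)            # semantic prototypes, (n_way, feat_dim)
        lam = torch.sigmoid(self.h(w))    # (n_way, 1), each in (0, 1)
        return lam * visual_protos + (1.0 - lam) * w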
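
The Pseudocode row points to Algorithm 1 (the episode loss) in the supplementary material, which is not reproduced here. The following is a hedged sketch of a prototypical-network-style episode loss built on the mixing module above; the function name, argument layout, and squared-Euclidean metric are assumptions rather than the authors' exact algorithm.

```python
import torch
import torch.nn.functional as F

def episode_loss(support_feats, support_labels, query_feats, query_labels,
                 label_embs, mixer, n_way):
    # Visual prototypes: per-class means of support features, (n_way, d).
    protos = torch.stack([support_feats[support_labels == c].mean(dim=0)
                          for c in range(n_way)])
    # Fuse with the class-name embeddings via the AdaptiveMixing sketch above.
    protos = mixer(protos, label_embs)
    # Negative squared Euclidean distances serve as logits over n_way classes.
    logits = -torch.cdist(query_feats, protos).pow(2)
    return F.cross_entropy(logits, query_labels)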
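
The Dataset Splits row records the few-shot protocol's key constraint: train and test classes are disjoint. A tiny sketch of that constraint is below; the counts and function name are placeholders, and the benchmarks' official splits should be used where available.

```python
import random

def split_classes_disjoint(all_classes, n_train, n_val, seed=0):
    # Shuffle once, then carve out mutually disjoint class sets.
    rng = random.Random(seed)
    classes = sorted(all_classes)
    rng.shuffle(classes)
    train = set(classes[:n_train])
    val = set(classes[n_train:n_train + n_val])
    test = set(classes[n_train + n_val:])
    assert not (train & val or train & test or val & test)
    return train, val, test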
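
The Software Dependencies row notes that GloVe is the only named dependency, with no pinned version. For completeness, here is a sketch of reading pre-trained GloVe vectors for class-name tokens; the file name in the usage comment is an assumption, since the paper does not specify which GloVe release it uses.

```python
import numpy as np

def load_glove(path, vocab):
    # Each line of a GloVe text file is: token v1 v2 ... vD.
    vecs = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            tok, *vals = line.rstrip().split(" ")
            if tok in vocab:
                vecs[tok] = np.asarray(vals, dtype=np.float32)
    return vecs

# Hypothetical usage: fetch vectors for class-name tokens, then average
# multi-word names into a single label embedding.
# vecs = load_glove("glove.840B.300d.txt", {"goose", "malamute"})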