argmax centroid

Authors: Chengyue Gong, Mao Ye, Qiang Liu

NeurIPS 2021

Reproducibility variables, results, and supporting LLM responses:

Research Type: Experimental
  "We demonstrate the applicability and effectiveness of our method on a variety of real-world multitask learning applications, including few-shot image classification, personalized dialogue systems and multi-target domain adaptation."

Researcher Affiliation: Academia
  Chengyue Gong, Mao Ye, Qiang Liu; Computer Science Department, The University of Texas at Austin ({cygong17,my21,lqiang}@cs.utexas.edu)

Pseudocode: Yes
  "Algorithm 1 Main Algorithm: Argmax Centroids for Approximating"

Open Source Code: No
  The paper does not provide any explicit statement about open-sourcing code, nor a link to a code repository for the described methodology.

Open Datasets: Yes
  "Standard benchmarks of few-shot classification are chosen for experiments. We evaluate all the baselines and our algorithms on two subsets of ImageNet: Mini-ImageNet and Tiered-ImageNet (Sun et al., 2019)."

Dataset Splits: Yes
  "Mini-ImageNet contains 64 classes for training, 16 for validation and 20 for test."

Hardware Specification: No
  The paper does not describe the hardware used to run the experiments (e.g., GPU/CPU models or memory specifications).

Software Dependencies: No
  The paper mentions software components such as BERT-base and the Adam optimizer, but does not give version numbers for these or any other dependencies needed for replication.

Experiment Setup: Yes
  "In all experiments, we set the replacement controller hyperparameters to 1.2 and 0.5 for Algorithm 1. For few-shot learning based on SIB and IFSL, we set n = 16. For meta-training, we use Adam (Kingma & Ba, 2014) with learning rates of 10^-3 and 10^-2 for inner and outer loop training, respectively. During evaluation, for all models, we used beam search with beam size 4 and length penalty 1.2."
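The decoding setup quoted above (beam size 4, length penalty 1.2) can be sketched with a minimal length-normalized beam search. The scoring rule (summed log-probability divided by length raised to the penalty) and the toy `next_logprobs` model below are illustrative assumptions; the paper's exact decoding formula is not reproduced in this report.

```python
import math

def beam_search(next_logprobs, max_len, beam_size=4, length_penalty=1.2):
    """Minimal length-normalized beam search sketch.

    `next_logprobs(prefix)` returns {token: log-probability}. The
    normalization used here is a common convention, not necessarily the
    exact formula used in the paper.
    """
    beams = [([], 0.0)]  # (token sequence, summed log-probability)
    for _ in range(max_len):
        # Expand every hypothesis in the beam by one token.
        candidates = [
            (seq + [tok], logp + lp)
            for seq, logp in beams
            for tok, lp in next_logprobs(seq).items()
        ]
        # Keep the beam_size hypotheses with the best normalized score.
        candidates.sort(
            key=lambda c: c[1] / (len(c[0]) ** length_penalty), reverse=True
        )
        beams = candidates[:beam_size]
    return beams[0][0]

# Toy bigram-like model (hypothetical): tokens tend to repeat.
def toy_model(prefix):
    last = prefix[-1] if prefix else "a"
    p = {"a": 0.7, "b": 0.3} if last == "a" else {"a": 0.3, "b": 0.7}
    return {tok: math.log(prob) for tok, prob in p.items()}

best = beam_search(toy_model, max_len=3)  # beam size 4, penalty 1.2
# → ['a', 'a', 'a'], the highest-probability length-3 sequence
```

Because all candidates at a given step share the same length, the penalty mainly matters when hypotheses of different lengths compete (e.g., with early stopping at an end-of-sequence token, which this sketch omits).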