Self-training for Few-shot Transfer Across Extreme Task Differences

Authors: Cheng Perng Phoo, Bharath Hariharan

ICLR 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Evidenced by the experimental sections: "5 EXPERIMENTS", "5.1 FEW-SHOT TRANSFER ACROSS DRASTICALLY DIFFERENT DOMAINS", "5.1.1 RESULTS"
Researcher Affiliation | Academia | "Cheng Perng Phoo, Bharath Hariharan, Department of Computer Science, Cornell University, {cpphoo, bharathh}@cs.cornell.edu"
Pseudocode | No | The paper describes its method in prose and equations but does not include an explicitly labeled pseudocode or algorithm block. (A hedged sketch of the training step the paper describes appears after this table.)
Open Source Code | Yes | "Our code is available at https://github.com/cpphoo/STARTUP."
Open Datasets | Yes | "We experiment with the challenging BSCD-FSL benchmark introduced in Guo et al. (2020). The base dataset in this benchmark is miniImageNet (Vinyals et al., 2016)..." The novel target datasets are CropDiseases, EuroSAT, ISIC2018, and ChestX.
Dataset Splits | Yes | "To pick the suitable starting learning rate, 10% of the unlabeled data and 5% of the labeled data (1% when using ImageNet as the base dataset) are set aside as our internal validation set." (See the split sketch after this table.)
Hardware Specification | No | The paper does not report hardware details such as GPU models, CPU types, or cloud instances used to run the experiments.
Software Dependencies | No | The paper cites PyTorch and scikit-learn but does not state version numbers for these dependencies (e.g., "PyTorch 1.9").
Experiment Setup | Yes | "The student model is trained for 1000 epochs... We use a batch size of 256... We use the SGD with momentum optimizer with momentum 0.9 and weight decay 1e-4. To pick the suitable starting learning rate, 10% of the unlabeled data and 5% of the labeled data... are set aside as our internal validation set. We pick the starting learning rate by training the student with starting learning rate lr ∈ {1e-1, 5e-2, 3e-2, 1e-2, 5e-3, 3e-3, 1e-3}..." (See the learning-rate sweep sketch after this table.)
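Since the paper provides no labeled pseudocode (see the Pseudocode row), the following is a minimal sketch of one STARTUP student update as the paper describes it in prose: a teacher trained on the labeled base data produces soft pseudo-labels on unlabeled target data, and the student minimizes a supervised cross-entropy term plus a KL term toward those pseudo-labels. The paper's additional SimCLR self-supervised term is omitted for brevity, and all function and variable names here are hypothetical, not the authors' code.

```python
import torch
import torch.nn.functional as F

def startup_student_step(student, teacher, base_batch, target_batch, optimizer):
    """Illustrative sketch of one STARTUP student update (not the authors' code).

    `student` and `teacher` are assumed to be classifiers returning logits;
    the SimCLR self-supervision term from the paper is omitted for brevity.
    """
    x_base, y_base = base_batch          # labeled base-dataset batch
    x_target = target_batch              # unlabeled target-domain batch

    # Supervised cross-entropy on the base dataset.
    loss_base = F.cross_entropy(student(x_base), y_base)

    # The frozen teacher produces soft pseudo-labels on the target data.
    with torch.no_grad():
        soft_labels = F.softmax(teacher(x_target), dim=1)

    # The student matches the pseudo-labels via KL divergence.
    log_probs = F.log_softmax(student(x_target), dim=1)
    loss_target = F.kl_div(log_probs, soft_labels, reduction="batchmean")

    loss = loss_base + loss_target       # (+ SimCLR loss in the paper)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```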
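The internal validation split quoted in the Dataset Splits row can be reproduced along these lines. This is a minimal sketch assuming scikit-learn's `train_test_split` (the paper cites scikit-learn but does not show its split code); the dummy arrays, variable names, and the choice to stratify the labeled split are all assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Dummy stand-ins for the real data (shapes are hypothetical).
unlabeled_images = np.random.rand(1000, 84, 84, 3)
base_images = np.random.rand(2000, 84, 84, 3)
base_labels = np.random.randint(0, 64, size=2000)  # e.g. 64 miniImageNet base classes
base_is_imagenet = False

# Hold out 10% of the unlabeled target data for internal validation.
unlab_train, unlab_val = train_test_split(
    unlabeled_images, test_size=0.10, random_state=0)

# Hold out 5% of the labeled base data (1% when the base is ImageNet).
# Stratifying by class is an assumption; the paper does not say.
frac = 0.01 if base_is_imagenet else 0.05
base_train_x, base_val_x, base_train_y, base_val_y = train_test_split(
    base_images, base_labels, test_size=frac,
    stratify=base_labels, random_state=0)
```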
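Finally, a minimal PyTorch sketch of the optimizer and learning-rate sweep from the Experiment Setup row. The grid, momentum, weight decay, epoch count, and batch size are quoted from the paper; the student model and the validation routine are hypothetical placeholders.

```python
import torch
import torch.nn as nn

# Hyperparameters quoted from the paper's setup.
EPOCHS, BATCH_SIZE = 1000, 256
LR_GRID = [1e-1, 5e-2, 3e-2, 1e-2, 5e-3, 3e-3, 1e-3]

def make_optimizer(model: nn.Module, lr: float) -> torch.optim.SGD:
    # SGD with momentum 0.9 and weight decay 1e-4, as stated in the paper.
    return torch.optim.SGD(model.parameters(), lr=lr,
                           momentum=0.9, weight_decay=1e-4)

def train_and_validate(lr: float) -> float:
    """Hypothetical placeholder: train a student with this starting lr
    and return its loss on the internal validation set."""
    student = nn.Linear(512, 64)         # stand-in for the student network
    optimizer = make_optimizer(student, lr)
    # ... full training loop (1000 epochs, batch size 256) elided ...
    return torch.rand(()).item()         # placeholder validation loss

# Pick the starting learning rate with the best internal validation loss.
best_lr = min(LR_GRID, key=train_and_validate)
print(f"selected starting learning rate: {best_lr}")
```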