Label Efficient Learning of Transferable Representations across Domains and Tasks

Authors: Zelun Luo, Yuliang Zou, Judy Hoffman, Li Fei-Fei

NeurIPS 2017

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Our method shows compelling results on novel classes within a new domain even when only a few labeled examples per class are available, outperforming the prevalent fine-tuning approach. In addition, we demonstrate the effectiveness of our framework on the transfer learning task from image object recognition to video action recognition. |
| Researcher Affiliation | Academia | Zelun Luo (Stanford University, zelunluo@stanford.edu); Yuliang Zou (Virginia Tech, ylzou@vt.edu); Judy Hoffman (University of California, Berkeley, jhoffman@eecs.berkeley.edu); Li Fei-Fei (Stanford University, feifeili@cs.stanford.edu) |
| Pseudocode | No | The paper contains mathematical equations but no structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement about open-sourcing the code, nor a link to a code repository for the described methodology. |
| Open Datasets | Yes | We present results transferring from a subset of Google Street View House Numbers (SVHN) [41] containing only digits 0-4 to a subset of MNIST [29] containing only digits 5-9. Secondly, we present results on the challenging setting of adapting from ImageNet [6] object-centric images to UCF-101 [57] videos for action recognition. |
| Dataset Splits | Yes | We subsample $k$ examples from each class to construct dataset $D_2$ so that we can perform traditional training or episodic $(k-1)$-shot learning. We experiment with $k = 2, 3, 4, 5$... we randomly subsample 10 different subsets $\{D_2^i\}_{i=1}^{10}$ from the training split of the MNIST dataset, and use the remaining data as $\{D_3^i\}_{i=1}^{10}$ for each $k$. A data-preparation sketch of this split protocol follows the table. |
| Hardware Specification | No | The paper does not specify any particular hardware (e.g., GPU models, CPU types) used for the experiments; it mentions only the use of the PyTorch framework. |
| Software Dependencies | No | The paper mentions that "We conduct all the experiments with the PyTorch framework" but does not specify a version number for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | We use the temperature $\tau = 2$ for source-target semantic transfer and $\tau = 1$ for within-target transfer... We use $\alpha = 0.1$ and $\beta = 0.1$ in our objective function. The network is trained with the Adam optimizer [25] and with learning rate $10^{-3}$. A training-step sketch based on these hyperparameters follows the table. |
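
The dataset rows above quote a split protocol: SVHN digits 0-4 as the labeled source, MNIST digits 5-9 as the target, with $k$ labeled examples per class forming $D_2$ and the remainder forming $D_3$. Below is a minimal sketch of one way to build those splits with torchvision; the helper names (`filter_by_digits`, `split_k_per_class`) and the fixed seed are illustrative assumptions, not the authors' released code.

```python
# Hypothetical sketch of the digit-split protocol quoted above: SVHN digits
# 0-4 as the labeled source, MNIST digits 5-9 as the target, with k labeled
# examples per class subsampled into D2 and the remainder kept as D3.
import numpy as np
from torch.utils.data import Subset
from torchvision import datasets, transforms

def filter_by_digits(dataset, digits):
    """Indices of examples whose label falls in `digits`.
    SVHN stores labels in `.labels`, MNIST in `.targets`."""
    labels = np.asarray(dataset.labels if hasattr(dataset, "labels")
                        else dataset.targets)
    return np.where(np.isin(labels, list(digits)))[0]

def split_k_per_class(dataset, digits, k, seed=0):
    """Subsample k labeled examples per class (D2); the rest becomes D3.
    Assumes an MNIST-style dataset with a `.targets` attribute."""
    rng = np.random.RandomState(seed)
    labels = np.asarray(dataset.targets)
    d2_idx, d3_idx = [], []
    for c in digits:
        idx = np.where(labels == c)[0]
        rng.shuffle(idx)
        d2_idx.extend(idx[:k])
        d3_idx.extend(idx[k:])
    return Subset(dataset, d2_idx), Subset(dataset, d3_idx)

to_tensor = transforms.ToTensor()
svhn = datasets.SVHN("data", split="train", download=True, transform=to_tensor)
mnist = datasets.MNIST("data", train=True, download=True, transform=to_tensor)

source = Subset(svhn, filter_by_digits(svhn, range(0, 5)))    # digits 0-4
d2, d3 = split_k_per_class(mnist, range(5, 10), k=5, seed=0)  # digits 5-9
```

The paper draws 10 different subsets $\{D_2^i\}_{i=1}^{10}$ for each $k$; looping `seed` over `range(10)` reproduces that structure.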
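
The experiment-setup row quotes temperatures $\tau = 2$ (source-target) and $\tau = 1$ (within-target), weights $\alpha = \beta = 0.1$, and Adam at learning rate $10^{-3}$. The sketch below wires those hyperparameters into a generic training step. The temperature-scaled KL term is a common stand-in for softened-target transfer losses, not the paper's exact semantic-transfer objective (which is not reproduced in this excerpt), and all names are hypothetical.

```python
# Hypothetical sketch of the quoted training setup: a supervised loss plus
# two temperature-scaled transfer terms weighted by alpha and beta, optimized
# with Adam at learning rate 1e-3. The KL transfer loss is an assumed
# stand-in for the paper's semantic-transfer objective.
import torch
import torch.nn.functional as F

ALPHA, BETA = 0.1, 0.1     # objective weights quoted in the paper
TAU_ST, TAU_TT = 2.0, 1.0  # temperatures: source->target, within-target

def soft_transfer_loss(student_logits, teacher_logits, tau):
    """KL divergence between temperature-softened distributions."""
    log_p = F.log_softmax(student_logits / tau, dim=1)
    q = F.softmax(teacher_logits / tau, dim=1)
    return F.kl_div(log_p, q, reduction="batchmean") * tau ** 2

# Placeholder model: 5 output classes for target digits 5-9 remapped to 0-4.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 5))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # lr = 10^-3

def training_step(x_tgt, y_tgt, src_logits, tgt_ref_logits):
    """One update: supervised loss on labeled target data plus the
    source->target and within-target transfer terms."""
    logits = model(x_tgt)
    loss = F.cross_entropy(logits, y_tgt)
    loss = loss + ALPHA * soft_transfer_loss(logits, src_logits, TAU_ST)
    loss = loss + BETA * soft_transfer_loss(logits, tgt_ref_logits, TAU_TT)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The $\tau^2$ factor keeps gradient magnitudes comparable across temperatures, a standard convention for softened-target losses.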