Learning by Transferring from Unsupervised Universal Sources
Authors: Yu-Xiong Wang, Martial Hebert
AAAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we present experimental results evaluating our unsupervised sources (UUS) as well as our HTL approach (MT-SVM) on standard recognition benchmarks, comparing with several state-of-the-art methods, and validating across tasks and categories the generality of our sources. |
| Researcher Affiliation | Academia | Yu-Xiong Wang and Martial Hebert, Robotics Institute, Carnegie Mellon University, {yuxiongw, hebert}@cs.cmu.edu |
| Pseudocode | No | The paper describes algorithms verbally and with mathematical formulations (Eqn. 1, 2, 4, 5, 6) but does not present a formal pseudocode block or algorithm listing with structured steps. |
| Open Source Code | No | The paper does not provide a statement about releasing source code or a link to a code repository for the methodology described. |
| Open Datasets | Yes | Here, for purpose of reproducibility, we simply use the ILSVRC 2012 training dataset without access to the label information, leading to N = 1.2M unlabeled images D. ... We use Webcam as the target domain... We view the ILSVRC 2012 training dataset as the source domain... The Office dataset contains 31 classes... SUN-397 dataset (Xiao et al. 2014)... UUSs generated on PASCAL 2007. |
| Dataset Splits | Yes | Subset A: we focus on the 16 common classes between Webcam and ILSVRC as our target categories... 1 labeled training and 10 testing images per category are randomly selected on the Webcam domain, i.e., one-shot transfer and a balanced test set across categories. Therefore, each test split has 160 examples. (A split-construction sketch follows the table.) |
| Hardware Specification | No | The paper mentions using 'convolutional neural network (CNN) features pre-trained on ILSVRC 2012' and extracting a 'd = 4,096-D feature vector fc7', but it does not specify any hardware details like GPU models, CPU models, or memory used for their own model training or experimentation. (A feature-extraction sketch follows the table.) |
| Software Dependencies | No | The paper mentions tools like 'convolutional neural network (CNN) features', 'SVM', 'elastic net regularization', and 'feature-sign search', but it does not provide specific version numbers for any software or libraries used in their implementation. |
| Experiment Setup | Yes | Using an augmented pseudo-labeled dataset D_AUG of C = 30 pseudo-classes with K + G = 6 + 50 samples per pseudo-class, we generate S = 10 split PBCs. Repeating T = 2,000 subsampling in parallel, we have generated J = 20K source hypotheses in total. ... For all our experiments, we then fixed α = 10, and tuned γ to minimize the leave-one-out error. (A hypothesis-generation sketch follows the table.) |
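
The split protocol quoted in the Dataset Splits row is concrete enough to restate as code. Below is a minimal sketch, assuming a hypothetical one-folder-per-class layout for the Office Webcam images and a placeholder list of the 16 classes shared with ILSVRC; none of the paths or helper names come from the paper.

```python
# Hedged sketch of "Subset A": for each of the 16 Webcam classes shared with
# ILSVRC, randomly pick 1 labeled training image and 10 test images
# (one-shot transfer with a balanced test set of 16 * 10 = 160 examples).
import random
from pathlib import Path

WEBCAM_ROOT = Path("data/office31/webcam/images")  # assumed layout: one folder per class
# Placeholder for the 16 classes common to Webcam and ILSVRC (not listed in the excerpt).
COMMON_CLASSES = sorted(p.name for p in WEBCAM_ROOT.iterdir() if p.is_dir())[:16]

def make_one_shot_split(seed: int):
    rng = random.Random(seed)
    train, test = [], []
    for cls in COMMON_CLASSES:
        images = sorted((WEBCAM_ROOT / cls).glob("*.jpg"))
        rng.shuffle(images)
        train.append((images[0], cls))                    # 1 training image per category
        test.extend((img, cls) for img in images[1:11])   # 10 test images per category
    assert len(test) == 16 * 10                           # 160 test examples per split
    return train, test

train_split, test_split = make_one_shot_split(seed=0)
```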
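
The Hardware and Software rows note that the method relies on 4,096-D fc7 CNN features pre-trained on ILSVRC 2012 without naming a framework or version. As a stand-in, here is a hedged sketch that pulls an fc7-style activation from torchvision's ImageNet-pretrained AlexNet; the choice of torchvision and AlexNet, and the preprocessing, are assumptions rather than details from the paper.

```python
# Hedged sketch: extract a 4,096-D fc7-style feature with torchvision's AlexNet,
# standing in for the ILSVRC-2012 pre-trained CNN features the paper mentions.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()
preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def fc7(image_path: str) -> torch.Tensor:
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        feats = model.avgpool(model.features(x)).flatten(1)
        return model.classifier[:6](feats)   # 4,096-D activation after the fc7 layer

# fc7("example.jpg").shape  ->  torch.Size([1, 4096])
```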
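
The numbers in the Experiment Setup row imply J = S × T = 10 × 2,000 = 20,000 source hypotheses trained over C = 30 pseudo-classes with K + G = 56 samples each. The sketch below only mirrors that bookkeeping with linear SVMs on random pseudo-class splits; the feature dimensionality, the classifier, and the way splits are drawn are illustrative assumptions, and T is shrunk so the sketch runs quickly.

```python
# Hedged bookkeeping sketch for the source-hypothesis generation quoted above.
import numpy as np
from sklearn.svm import LinearSVC

C, SAMPLES_PER_CLASS, S, T, D = 30, 6 + 50, 10, 2, 4096   # T = 2,000 in the paper; 2 here
rng = np.random.default_rng(0)

# Random features standing in for the fc7 features of D_AUG (illustration only).
X = rng.standard_normal((C * SAMPLES_PER_CLASS, D)).astype(np.float32)
y = np.repeat(np.arange(C), SAMPLES_PER_CLASS)

hypotheses = []
for _ in range(T):                    # subsampling rounds (run in parallel in the paper)
    for _ in range(S):                # S split classifiers per round
        # Assumption: each split classifier separates a random half of the
        # pseudo-classes from the rest.
        positives = rng.choice(C, size=C // 2, replace=False)
        labels = np.isin(y, positives).astype(int)
        clf = LinearSVC(C=1.0).fit(X, labels)
        hypotheses.append(clf.coef_.ravel())

print(len(hypotheses))                # S * T source hypotheses in total (20,000 in the paper)
```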