Zero-Shot Recognition via Direct Classifier Learning with Transferred Samples and Pseudo Labels

Authors: Yuchen Guo, Guiguang Ding, Jungong Han, Yue Gao

AAAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on four datasets demonstrate consistent performance gains of our approach over the state-of-the-art two-step ZSR approaches.
Researcher Affiliation | Academia | Tsinghua National Laboratory for Information Science and Technology (TNList), School of Software, Tsinghua University, Beijing 100084, China; Northumbria University, Newcastle, NE1 8ST, UK. yuchen.w.guo@gmail.com, {dinggg,gaoyue}@tsinghua.edu.cn, jungong.han@northumbria.ac.uk
Pseudocode | Yes | Algorithm 1: Optimization algorithm for Eq. (8)
Open Source Code | No | The paper does not provide any explicit statement about releasing source code for the described methodology, nor a link to a code repository.
Open Datasets | Yes | The first one is CIFAR10 (Krizhevsky 2009), which has 10 object classes. The second is the Animals with Attributes (AwA) dataset (Lampert, Nickisch, and Harmeling 2014)... The third is the aPascal-aYahoo (aPY) dataset (Farhadi et al. 2009)... The last is the SUN scene recognition dataset (Patterson and Hays 2012).
Dataset Splits | Yes | For CIFAR10, in each split we select 2 classes as the target classes and the other 8 as the source classes, giving C(10,2) = 45 different splits. Following the split suggested in (Lampert, Nickisch, and Harmeling 2014), 40 classes with 24,295 images are adopted as the source classes and 10 classes with 6,180 images are adopted as the target classes. In the inductive setting, for each target class we select ms = 500, 500, 200, 200 source samples for CIFAR10, AwA, aPY, and SUN respectively. In the transductive setting, we further select mt = 500, 200, 50, 10 from unlabeled target samples.
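The CIFAR10 split scheme quoted above (every 2-class subset as target, the remaining 8 as source) can be sketched as follows; this is an illustrative reconstruction, not code from the paper, and the integer class indices are stand-ins for the actual labels:

```python
from itertools import combinations

# Illustrative sketch of the CIFAR10 split scheme described above:
# each pair of the 10 classes serves once as the target set, with the
# remaining 8 classes as the source set, giving C(10, 2) = 45 splits.
CIFAR10_CLASSES = list(range(10))  # indices 0..9 stand in for the labels

splits = []
for target in combinations(CIFAR10_CLASSES, 2):
    source = [c for c in CIFAR10_CLASSES if c not in target]
    splits.append((list(target), source))

print(len(splits))  # prints 45
```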
Hardware Specification | No | The paper does not provide specific hardware details (such as CPU/GPU models, memory, or processing units) used for running the experiments.
Software Dependencies | No | The paper mentions using the "quadprog function in Matlab" and "LIBSVM (Chang and Lin 2011)" but does not provide specific version numbers for these or any other software dependencies.
Experiment Setup | Yes | As introduced above, we select samples for each target class individually. In the inductive setting, for each target class we select ms = 500, 500, 200, 200 source samples for CIFAR10, AwA, aPY, and SUN respectively. In the transductive setting, we further select mt = 500, 200, 50, 10 from unlabeled target samples. In addition, to determine the values of β for sample selection and of μ and C for training the robust SVM, we adopt the class-wise cross-validation strategy (Zhang and Saligrama 2015; Guo et al. 2016), where β and C are chosen from {10⁻², 10⁻¹, 1, 10, 10²} and μ is chosen from {0, 0.025, 0.05, ..., 0.2}.
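The hyper-parameter grid search quoted above can be sketched as a simple loop; this is an assumed illustration only, and `class_wise_cv_accuracy` is a hypothetical placeholder for the paper's class-wise cross-validation score (holding out a subset of source classes as pseudo-target classes):

```python
from itertools import product

# Grids quoted in the experiment setup; beta and C share the same grid.
BETA_GRID = [10**-2, 10**-1, 1, 10, 10**2]
C_GRID = [10**-2, 10**-1, 1, 10, 10**2]
MU_GRID = [0.025 * k for k in range(9)]  # 0, 0.025, 0.05, ..., 0.2

def class_wise_cv_accuracy(beta, mu, C):
    """Hypothetical placeholder: would hold out some source classes as
    pseudo-target classes and return ZSR accuracy on them."""
    raise NotImplementedError

def select_parameters(score=class_wise_cv_accuracy):
    # Exhaustive search over the 5 * 9 * 5 = 225 grid points,
    # keeping the combination with the highest validation score.
    best, best_score = None, float("-inf")
    for beta, mu, C in product(BETA_GRID, MU_GRID, C_GRID):
        s = score(beta, mu, C)
        if s > best_score:
            best, best_score = (beta, mu, C), s
    return best
```

Any scoring callable can be plugged in via the `score` argument, which keeps the grid search separate from the (dataset-specific) validation procedure.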