Generalized Zero-Shot Learning with Deep Calibration Network

Authors: Shichen Liu, Mingsheng Long, Jianmin Wang, Michael I. Jordan

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We perform extensive evaluation with state of the art methods for zero-shot and generalized zero-shot learning on four benchmark datasets, which will validate the efficacy of the proposed DCN approach."
Researcher Affiliation | Academia | School of Software, Tsinghua University, China; KLiss, MOE; BNRist; Research Center for Big Data, Tsinghua University, China; University of California, Berkeley, USA. Emails: liushichen95@gmail.com, {mingsheng, jimwang}@tsinghua.edu.cn, jordan@berkeley.edu
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper mentions implementing the approach in PyTorch, with a footnote pointing to https://pytorch.org, but does not provide access to the authors' own source code for the described methodology.
Open Datasets | Yes | "Animals with Attributes (AwA) [32] is a widely-used dataset for coarse-grained zero-shot learning... Caltech-UCSD-Birds-200-2011 (CUB) [50] is a fine-grained dataset... SUN Attribute (SUN) [39] is a fine-grained dataset... Attribute Pascal and Yahoo (aPY) [13] is a small-scale dataset..."
Dataset Splits | Yes | "SUN Attribute (SUN) [39] is a fine-grained dataset, medium-scale in the number of images, containing 14,340 images from 717 types of scenes annotated with 102 attributes. We adopt the standard split of [32], containing 645 source classes (in which 65 classes are used for validation) and 72 target classes." (A sketch of this class partition follows the table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts) used to run its experiments.
Software Dependencies | No | "Our end-to-end trainable approach is implemented using PyTorch" (footnote 1 points to https://pytorch.org); no version numbers for software dependencies are provided.
Experiment Setup | Yes | "We use stochastic gradient descent with 0.9 momentum and a mini-batch size of 64. We cross-validate the learning rate in [10^-4, 10^-1], the temperature τ ∈ [0.1, 10], and the entropy-penalty parameter λ ∈ [10^-3, 10^-1]." (A training-setup sketch follows the table.)
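
To make the SUN split in the Dataset Splits row concrete, here is a minimal Python sketch of the class partition it describes: 717 scene classes divided into 645 source classes (65 of which serve as a validation subset) and 72 target classes. The index assignments below are illustrative placeholders; the actual split is the fixed standard split of [32], not a random shuffle.

```python
# Illustrative sketch of the SUN class partition from the standard split of [32]:
# 717 scene classes -> 645 source (seen) classes, 65 of which are held out for
# validation, and 72 target (unseen) classes. The indices here are hypothetical;
# the real split is a fixed list published with [32], not a random shuffle.
import random

ALL_CLASSES = list(range(717))
rng = random.Random(0)               # deterministic stand-in for the fixed split
rng.shuffle(ALL_CLASSES)

source_classes = ALL_CLASSES[:645]   # seen classes used for training
val_classes = source_classes[-65:]   # validation subset drawn from source classes
target_classes = ALL_CLASSES[645:]   # unseen classes for (G)ZSL evaluation

assert len(source_classes) == 645 and len(val_classes) == 65
assert len(target_classes) == 72     # 645 + 72 = 717
```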
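
The Experiment Setup row also lends itself to a short sketch. Below is a minimal, hypothetical PyTorch rendering of that configuration: SGD with 0.9 momentum, mini-batches of 64, and cross-validation grids spanning the reported ranges for the learning rate, temperature τ, and entropy-penalty weight λ. The linear model, the dummy features, and the `calibration_loss` function are illustrative assumptions, not the authors' released implementation; in particular, the loss is only a plausible temperature-scaled cross-entropy with an entropy penalty consistent with the τ and λ hyperparameters, not a verbatim reproduction of the DCN objective.

```python
# Hypothetical sketch of the reported setup: SGD with 0.9 momentum, batch size
# 64, and cross-validated grids for learning rate, temperature, and entropy
# penalty. Model, data, and loss wiring are placeholders, not the DCN code.
import itertools
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset

# Dummy data standing in for precomputed image features and seen-class labels.
train_set = TensorDataset(torch.randn(1024, 2048), torch.randint(0, 645, (1024,)))

def calibration_loss(logits_seen, labels, logits_unseen, tau, lam):
    """Temperature-scaled cross-entropy on seen classes plus an entropy penalty
    on unseen-class predictions; a plausible stand-in consistent with the tau
    and lambda hyperparameters above, not the verbatim DCN objective."""
    ce = F.cross_entropy(logits_seen / tau, labels)
    p = F.softmax(logits_unseen / tau, dim=1)
    entropy = -(p * p.clamp_min(1e-12).log()).sum(dim=1).mean()
    return ce + lam * entropy

# Grids matching the cross-validation ranges reported in the paper.
learning_rates = [1e-4, 1e-3, 1e-2, 1e-1]  # lr in [10^-4, 10^-1]
temperatures = [0.1, 1.0, 10.0]            # tau in [0.1, 10]
entropy_weights = [1e-3, 1e-2, 1e-1]       # lambda in [10^-3, 10^-1]

for lr, tau, lam in itertools.product(learning_rates, temperatures, entropy_weights):
    model = nn.Linear(2048, 645)           # placeholder for the actual DCN network
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loader = DataLoader(train_set, batch_size=64, shuffle=True)
    for features, labels in loader:
        logits = model(features)
        # In a real GZSL setup, unseen-class logits would come from semantic
        # embeddings of target classes; seen logits are reused here as a stand-in.
        loss = calibration_loss(logits, labels, logits, tau, lam)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```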