A Unified Semantic Embedding: Relating Taxonomies and Attributes

Authors: Sung Ju Hwang, Leonid Sigal

NeurIPS 2014 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We validate our method for multiclass categorization performance on two different datasets generated from a public image collection, and also test for knowledge transfer on few-shot learning.
Researcher Affiliation | Industry | Sung Ju Hwang, Disney Research, Pittsburgh, PA, sungju.hwang@disneyresearch.com; Leonid Sigal, Disney Research, Pittsburgh, PA, lsigal@disneyresearch.com
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to its own source code. It only mentions using code provided by the authors of a baseline method (NCM [11]) in a footnote.
Open Datasets | Yes | We use the Animals with Attributes dataset [1], which consists of 30,475 images of 50 animal classes, with 85 class-level attributes.
Dataset Splits | Yes | Since there is no fixed training/test split, we use a {30, 30, 30} random split for training/validation/test (a split sketch appears below the table).
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running its experiments, only general mentions of features derived from deep convolutional networks.
Software Dependencies | No | The paper mentions the use of 'DeCAF features [18]' but does not provide specific version numbers for any software dependencies used in their own experimental setup.
Experiment Setup | Yes | For parameters, the projection dimension de = 50 for all our models. For other parameters, we find the optimal value by cross-validation on the validation set. We set µ1 = 1, which balances the main and auxiliary tasks equally, search for µ2 for the discriminative/generative tradeoff in the range {0.01, 0.1, 0.2, ..., 1, 10}, and set the ℓ2-norm regularization parameter λ = 1. For the sparsity parameter γ1, we set it to select on average several (3 or 4) attributes per class, and for the disjoint parameter γ2, we use 10γ1, without tuning for performance (a hyperparameter-selection sketch appears below the table).
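
The {30, 30, 30} split reported in the Dataset Splits row is random rather than fixed, so reproducing the experiments means regenerating it. Below is a minimal Python sketch, assuming the three numbers denote images sampled per class for training, validation, and test; that per-class reading, the function name, and the toy label vector are assumptions for illustration, not details given in the paper.

```python
import numpy as np

def per_class_split(labels, n_train=30, n_val=30, n_test=30, seed=0):
    """Sample disjoint train/val/test image indices for each class.

    labels: 1-D integer array with one class label per image
            (e.g., the 50 AwA animal classes).
    Returns three index arrays: (train_idx, val_idx, test_idx).
    """
    rng = np.random.default_rng(seed)
    train_idx, val_idx, test_idx = [], [], []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)   # all images of class c
        rng.shuffle(idx)                    # random order within the class
        train_idx.extend(idx[:n_train])
        val_idx.extend(idx[n_train:n_train + n_val])
        test_idx.extend(idx[n_train + n_val:n_train + n_val + n_test])
    return np.asarray(train_idx), np.asarray(val_idx), np.asarray(test_idx)

# Example with a toy label vector standing in for the real AwA annotations.
labels = np.repeat(np.arange(50), 100)      # 50 classes, 100 images each
train_idx, val_idx, test_idx = per_class_split(labels)
```

Changing the seed produces a new random split, consistent with the paper's use of random rather than fixed splits.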
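
The Experiment Setup row amounts to a small validation-set search with most hyperparameters fixed. The sketch below restates that protocol under explicit assumptions: fit_and_score is a hypothetical stand-in for the paper's unreleased training routine, and filling the ellipsis in the µ2 grid with 0.1-sized steps between 0.1 and 1 is our reading, not something the paper spells out.

```python
import numpy as np

# Fixed settings reported in the paper.
D_E = 50        # projection dimension d_e, fixed for all models
MU1 = 1.0       # balances the main and auxiliary tasks equally
LAMBDA = 1.0    # l2-norm regularization parameter

# mu2 grid: the paper writes {0.01, 0.1, 0.2, ..., 1, 10}; expanding the
# ellipsis with 0.1-sized steps is an assumption.
MU2_GRID = [0.01] + [round(0.1 * k, 1) for k in range(1, 11)] + [10.0]

def gamma2_from_gamma1(gamma1):
    """The disjoint-regularization weight is fixed to 10 * gamma1, untuned."""
    return 10.0 * gamma1

def select_mu2(fit_and_score, mu2_grid=MU2_GRID):
    """Pick mu2 by validation accuracy.

    fit_and_score(mu2) -> float trains the model with the fixed settings
    above and returns accuracy on the validation split; it is a placeholder
    for training code the paper does not release.
    """
    scores = {mu2: fit_and_score(mu2) for mu2 in mu2_grid}
    best = max(scores, key=scores.get)
    return best, scores

# Toy usage with a dummy scorer (illustration only; it peaks at mu2 = 1).
best_mu2, scores = select_mu2(lambda mu2: -abs(np.log10(mu2)))
print("selected mu2:", best_mu2)
```

The sparsity weight γ1 is not grid-searched here; per the paper it is set so that roughly 3 or 4 attributes are selected per class on average, which requires inspecting the learned attribute weights rather than a fixed list of candidate values.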