Zero-shot recognition with unreliable attributes

Authors: Dinesh Jayaraman, Kristen Grauman

NeurIPS 2014

Reproducibility assessment. Each entry below gives the variable, the assessed result, and the supporting LLM response:
Research Type: Experimental. Evidence: "On three datasets, we demonstrate the benefit for visual category learning with zero or few training examples, a critical domain for rare categories or categories defined on the fly." "We demonstrate the idea on three large datasets of object and scene categories, and show its clear advantages over status quo models." (Section 4, Experiments: "Table 1 shows the results.")

Researcher Affiliation: Academia. Evidence: "Dinesh Jayaraman, University of Texas at Austin, Austin, TX 78701, dineshj@cs.utexas.edu; Kristen Grauman, University of Texas at Austin, Austin, TX 78701, grauman@cs.utexas.edu"

Pseudocode: Yes. Evidence: "For full pseudocode and a schematic illustration of our method, please see supp."

Open Source Code: No. The paper does not provide a link to, or an explicit statement about, the availability of its own source code; it only mentions using code from other authors for a baseline: "We use the authors' code: http://attributes.kyb.tuebingen.mpg.de/"

Open Datasets: Yes. Evidence: "We use three datasets: (1) Animals with Attributes (AwA) [8] (M = 85 attributes, K = 10 unseen classes, 30,475 total images), (2) aPascal/aYahoo objects (aPY) [3] (M = 65, K = 12, 15,339 images), (3) SUN scene attributes (SUN) [20] (M = 102, K = 10, 14,340 images)."

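For reference, the reported dataset statistics can be collected into a small Python block. This is a convenience summary only; the dictionary layout and key names are ours, not the paper's.

```python
# Reported statistics for the three evaluation datasets.
# M = number of attributes, K = number of unseen (zero-shot) classes.
# Key names are illustrative; they do not come from the paper.
DATASETS = {
    "AwA": {"M": 85,  "K": 10, "num_images": 30475},  # Animals with Attributes [8]
    "aPY": {"M": 65,  "K": 12, "num_images": 15339},  # aPascal/aYahoo objects [3]
    "SUN": {"M": 102, "K": 10, "num_images": 14340},  # SUN scene attributes [20]
}
```
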
Dataset Splits Yes Our method reserves 20% of the attribute-labeled images as ROC validation data, then pools it with the remaining 80% to train the final attribute classifiers.We use the unseen class splits specified in [9] for Aw A and a PY, and randomly select the 10 unseen classes for SUN (see supp.).
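A minimal sketch of this split protocol, assuming scikit-learn. `LinearSVC` stands in for the paper's χ2-kernel attribute SVMs, and the function name is hypothetical; the point is the order of operations reported above: estimate reliability on a held-out 20%, then retrain the final classifier on the pooled 100%.

```python
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.metrics import roc_curve

def train_attribute_with_roc_validation(X, y, seed=0):
    """X: (n, d) features; y: binary labels for one attribute."""
    # Reserve 20% of the attribute-labeled images as ROC validation data.
    X_tr, X_val, y_tr, y_val = train_test_split(
        X, y, test_size=0.2, random_state=seed, stratify=y)

    # Fit a provisional classifier on the 80% and measure its reliability
    # on the held-out 20% via ROC statistics.
    provisional = LinearSVC(C=10.0).fit(X_tr, y_tr)
    fpr, tpr, _ = roc_curve(y_val, provisional.decision_function(X_val))

    # Pool the validation data back in: the final attribute classifier
    # is trained on all 100% of the labeled images.
    final = LinearSVC(C=10.0).fit(X, y)
    return final, (fpr, tpr)
```
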
Hardware Specification: No. The paper does not specify hardware details such as GPU models, CPU types, or memory used to run the experiments.

Software Dependencies: No. The paper mentions training SVMs and using a random forest algorithm, but it does not name any software libraries or frameworks with version numbers (e.g., Python, PyTorch, or scikit-learn versions).

Experiment Setup: Yes. Evidence: "Following [8], we train attribute SVMs with combined χ2-kernels, one kernel per feature channel, and set C = 10." "Tree depths of (AwA-9, aPY-6, SUN-8), and generate (#m, #t) tests per node (AwA-(10,7), aPY-(8,2), SUN-(4,5))." "We learn J = 100 trees per forest."
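
A sketch, assuming scikit-learn, of how the reported attribute-SVM setup could be reproduced: one χ2 kernel per feature channel, combined here by a uniform average (the paper's channel weighting is not specified in the quote), with C = 10. The random forest itself is the paper's custom zero-shot algorithm, so its per-dataset hyperparameters are collected only as a reference table; all helper and key names below are ours.

```python
import numpy as np
from sklearn.metrics.pairwise import chi2_kernel
from sklearn.svm import SVC

def combined_chi2_kernel(channels_a, channels_b, gamma=1.0):
    """Average per-channel chi^2 kernels ("combined chi^2-kernels").

    channels_a / channels_b: lists of (n, d_c) non-negative feature
    arrays, one array per feature channel. A uniform average over
    channels is an assumption, not a detail given in the paper.
    """
    kernels = [chi2_kernel(Xa, Xb, gamma=gamma)
               for Xa, Xb in zip(channels_a, channels_b)]
    return np.mean(kernels, axis=0)

def fit_attribute_svm(train_channels, labels):
    """Train one attribute SVM on the precomputed combined kernel, C = 10."""
    K_train = combined_chi2_kernel(train_channels, train_channels)
    return SVC(kernel="precomputed", C=10.0).fit(K_train, labels)

# Reported zero-shot random-forest hyperparameters, per dataset.
# The forest is the paper's custom algorithm; this is reference data only.
FOREST_PARAMS = {
    "AwA": {"tree_depth": 9, "tests_m": 10, "tests_t": 7},
    "aPY": {"tree_depth": 6, "tests_m": 8,  "tests_t": 2},
    "SUN": {"tree_depth": 8, "tests_m": 4,  "tests_t": 5},
}
NUM_TREES = 100  # J = 100 trees per forest
```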