Evolutionary Generalized Zero-Shot Learning

Authors: Dubing Chen, Chenyi Jiang, Haofeng Zhang

IJCAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on three popular GZSL benchmark datasets demonstrate that our model can learn from the test data stream while other baselines fail.
Researcher Affiliation | Academia | School of Artificial Intelligence, Nanjing University of Science and Technology
Pseudocode | Yes | Algorithm 1: The Proposed EGZSL Method
Open Source Code | Yes | The codes are available at https://github.com/cdb342/EGZSL.
Open Datasets | Yes | We evaluate EGZSL methods on three public ZSL benchmarks: 1) Animals with Attributes 2 (AWA2) [Lampert et al., 2013] contains 50 animal species and 85 attribute annotations, accounting for 37,322 samples. 2) Attribute Pascal and Yahoo (APY) [Farhadi et al., 2009] includes 32 classes of 15,339 samples and 64 attributes. 3) Caltech-UCSD Birds-200-2011 (CUB) [Wah et al., 2011] consists of 11,788 samples of 200 bird species, annotated by 312 attributes. (See the summary mapping below.)
Dataset Splits | Yes | For a given ZSL dataset, the original training set serves as the base set, while the test set is partitioned into various batches in a fixed random order. We split the data into seen and unseen classes according to the common GZSL benchmark procedure in [Xian et al., 2017]. (See the batching sketch below.)
Hardware Specification | No | The paper does not specify the hardware used for its experiments (e.g., exact GPU/CPU models, processor types, or memory amounts).
Software Dependencies | No | The paper mentions PyTorch and the Adam optimizer but does not specify version numbers for any software dependencies.
Experiment Setup | Yes | We employ the Adam optimizer [Kingma and Ba, 2015] with a learning rate of 5e-5 for the main experiments. We set the (mini-)batch size equal to the total number of data in each evolutionary stage. Each stage of data is optimized for one epoch only. We set λ at 1, τ at 0.5, m1 at 0.99, and m2 at 0.9 for the best results. (See the configuration sketch below.)
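
For quick reference, the benchmark statistics quoted in the Open Datasets row can be collected into a small Python mapping. This is only a convenience summary of the numbers above; the key names are illustrative and do not come from the authors' repository.

    # Benchmark statistics quoted in the Open Datasets row above.
    # The dictionary layout is illustrative, not taken from the authors' code.
    GZSL_BENCHMARKS = {
        "AWA2": {"classes": 50,  "attributes": 85,  "samples": 37322},
        "APY":  {"classes": 32,  "attributes": 64,  "samples": 15339},
        "CUB":  {"classes": 200, "attributes": 312, "samples": 11788},
    }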
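
The Dataset Splits row describes keeping the original training set as a base set and replaying the test set as a stream of batches in a fixed random order. Below is a minimal sketch of that partitioning using NumPy; the function name, number of stages, and seed value are placeholders, since the report does not state them.

    import numpy as np

    def make_test_stream(num_test_samples, num_stages, seed=0):
        """Split test-set indices into evolutionary stages in a fixed random order.

        A fixed seed keeps the stream order identical across runs; the actual
        seed and number of stages are not given in this report, so the values
        here are placeholders.
        """
        rng = np.random.default_rng(seed)
        order = rng.permutation(num_test_samples)  # fixed random order over the test set
        return np.array_split(order, num_stages)   # one index array per evolutionary stage

    # Hypothetical usage: replay a 10,000-sample test set as 20 stages.
    stages = make_test_stream(10_000, num_stages=20)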
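
The Experiment Setup row pins down the optimizer and hyperparameters: Adam with learning rate 5e-5, a batch size equal to the stage size, one epoch per stage, and λ = 1, τ = 0.5, m1 = 0.99, m2 = 0.9. The PyTorch sketch below only wires up that configuration; the classifier shape, the roles of λ/τ/m1/m2, and the loss are stand-ins, since Algorithm 1 itself is not reproduced in this report.

    import torch

    # Hyperparameters quoted in the Experiment Setup row.
    LR = 5e-5           # Adam learning rate
    LAMBDA = 1.0        # loss weight λ (its exact role is defined by Algorithm 1, not here)
    TAU = 0.5           # threshold/temperature τ (role assumed)
    M1, M2 = 0.99, 0.9  # momentum coefficients m1, m2 (roles assumed)

    # Placeholder classifier: 2048-d features to 200 classes is only an assumed shape.
    model = torch.nn.Linear(2048, 200)
    optimizer = torch.optim.Adam(model.parameters(), lr=LR)

    def run_stage(stage_features):
        """One evolutionary stage: a single epoch over the full stage.

        Because the batch size equals the stage size, each stage amounts to one
        optimizer step here. The loss is a runnable stand-in only; the paper's
        actual objective (which uses λ, τ, m1, and m2) is given by Algorithm 1.
        """
        optimizer.zero_grad()
        logits = model(stage_features)
        loss = LAMBDA * logits.logsumexp(dim=1).mean()  # placeholder objective
        loss.backward()
        optimizer.step()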