Boosting Generative Zero-Shot Learning by Synthesizing Diverse Features with Attribute Augmentation

Authors: Xiaojie Zhao, Yuming Shen, Shidong Wang, Haofeng Zhang

AAAI 2022, pp. 3454-3462 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We evaluate the proposed model on four benchmark datasets, observing significant performance improvement against the state-of-the-art."
Researcher Affiliation | Academia | (1) School of Computer Science and Engineering, Nanjing University of Science and Technology, China; (2) Department of Engineering Science, University of Oxford, UK; (3) School of Engineering, Newcastle University, UK. Contact: zhaoxj@njust.edu.cn, yuming.shen@eng.ox.ac.uk, shidong.wang@newcastle.ac.uk, zhanghf@njust.edu.cn
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any concrete access to source code for the described methodology.
Open Datasets | Yes | "We evaluated our method on four data sets, i.e., AWA1 (Lampert, Nickisch, and Harmeling 2009)... CUB (Welinder et al. 2010)... SUN (Patterson et al. 2014)..." Attribute Pascal and Yahoo is abbreviated as aPY (Farhadi et al. 2009).
Dataset Splits | No | The paper states "Suppose we have S seen classes for training, and U unseen classes that are only used for test," but does not provide the concrete train/validation/test splits or their percentages needed for reproducibility; a split-loading sketch follows the table.
Hardware Specification | Yes | "We only have one GTX 1080ti GPU."
Software Dependencies | No | The paper states the method is implemented in PyTorch (written "Py Toch" in the original), but specific version numbers for PyTorch or other software dependencies are not provided.
Experiment Setup | Yes | "For RFF-GZSL, we change the batch size. We set batch size to 128 on all four data sets... For CE-GZSL... We also set batch size to 128... In order to prevent overfitting of the generation network, we add L2 regularization to the optimizer of the generation network." An optimizer sketch follows the table.
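Because the paper leaves its splits unspecified, the following is a minimal sketch of how seen/unseen class partitions are typically loaded for these four benchmarks, assuming the widely used xlsa17 release with res101.mat and att_splits.mat. The file paths and array keys below describe that release, not details taken from this paper.

```python
import numpy as np
import scipy.io as sio

# Assumed layout of the xlsa17 "proposed splits" distribution:
# ResNet-101 features in res101.mat, class partitions in att_splits.mat.
res101 = sio.loadmat("data/AWA1/res101.mat")
splits = sio.loadmat("data/AWA1/att_splits.mat")

features = res101["features"].T           # (num_samples, 2048)
labels = res101["labels"].squeeze() - 1   # .mat indices are 1-based
attributes = splits["att"].T              # (num_classes, attribute_dim)

# Sample-level index lists (also 1-based in the .mat files).
train_idx = splits["trainval_loc"].squeeze() - 1        # seen-class training samples
test_seen_idx = splits["test_seen_loc"].squeeze() - 1   # seen-class test samples
test_unseen_idx = splits["test_unseen_loc"].squeeze() - 1

seen_classes = np.unique(labels[train_idx])             # the S seen classes
unseen_classes = np.unique(labels[test_unseen_idx])     # the U unseen classes
assert len(np.intersect1d(seen_classes, unseen_classes)) == 0

print(f"S = {len(seen_classes)} seen, U = {len(unseen_classes)} unseen")
```

In the generalized setting, the test pool mixes test_seen_idx and test_unseen_idx, which is why the seen/unseen class sets must be disjoint by construction.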
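The paper reports a batch size of 128 and L2 regularization on the generation network's optimizer, but names neither the optimizer nor the coefficient. Below is a minimal PyTorch sketch, assuming Adam and expressing the L2 term through its weight_decay argument; the generator layers, dimensions, learning rate, and decay coefficient are all illustrative placeholders, not values from the paper.

```python
import torch
import torch.nn as nn

# Placeholder generator: the paper's actual architecture is not reproduced here.
generator = nn.Sequential(
    nn.Linear(312 + 128, 4096),  # attribute dim + noise dim (assumed sizes)
    nn.LeakyReLU(0.2),
    nn.Linear(4096, 2048),       # synthesized ResNet-101-style feature
    nn.ReLU(),
)

# weight_decay adds an L2 penalty gradient to every parameter, the usual way
# to attach L2 regularization to an optimizer in PyTorch. lr and weight_decay
# values here are illustrative only.
optimizer = torch.optim.Adam(generator.parameters(), lr=1e-4, weight_decay=1e-4)

batch_size = 128  # matches the batch size the paper reports for all four datasets

# One illustrative step: noise + class attribute -> synthetic visual feature.
noise = torch.randn(batch_size, 128)
attrs = torch.rand(batch_size, 312)      # stand-in class attribute vectors
fake_features = generator(torch.cat([noise, attrs], dim=1))
loss = fake_features.pow(2).mean()       # dummy loss, for illustration only
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Using weight_decay keeps the regularization inside the optimizer, as the paper describes, rather than adding an explicit penalty term to the training loss.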