Co-Representation Network for Generalized Zero-Shot Learning

Authors: Fei Zhang, Guangming Shi

ICML 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate our approach on five benchmark datasets including AwA1 (Animals with Attributes 1), AwA2 (Animals with Attributes 2), CUB (Caltech-UCSD Birds 200), SUN (SUN Scene Recognition) and aPY (Attribute Pascal and Yahoo), following the GZSL settings (Xian et al., 2017) for seen/unseen splits and compare it with other methods including classic CZSL methods and several recent GZSL methods.
Researcher Affiliation | Academia | School of Artificial Intelligence, Xidian University, China. Correspondence to: Guangming Shi <gmshi@xidian.edu.cn>.
Pseudocode | No | The paper describes the algorithm steps in paragraph text but does not include structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described, nor does it state that the code is available.
Open Datasets | Yes | We evaluate our approach on five benchmark datasets including AwA1 (Animals with Attributes 1), AwA2 (Animals with Attributes 2), CUB (Caltech-UCSD Birds 200), SUN (SUN Scene Recognition) and aPY (Attribute Pascal and Yahoo), following the GZSL settings (Xian et al., 2017) for seen/unseen splits and compare it with other methods including classic CZSL methods and several recent GZSL methods.
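The proposed splits referenced above are distributed with the public benchmark release of Xian et al. (2017) as MATLAB files. Below is a minimal loading sketch; the file names, directory layout, and field names are assumptions based on that public release (the xlsa17 package), not details taken from the paper itself.

```python
# Hedged sketch: load ResNet-101 features and the Xian et al. (2017)
# proposed splits from the standard benchmark .mat files. Paths and
# field names follow the public xlsa17 release (assumed, not from the paper).
import numpy as np
import scipy.io as sio

res = sio.loadmat("AWA1/res101.mat")         # assumed path
splits = sio.loadmat("AWA1/att_splits.mat")  # assumed path

features = res["features"].T            # (N, 2048) ResNet-101 features
labels = res["labels"].squeeze() - 1    # 0-indexed class labels
attributes = splits["att"].T            # (num_classes, attr_dim)

# Location indices in the .mat files are 1-indexed.
trainval = splits["trainval_loc"].squeeze() - 1
test_seen = splits["test_seen_loc"].squeeze() - 1
test_unseen = splits["test_unseen_loc"].squeeze() - 1

X_train, y_train = features[trainval], labels[trainval]
X_test_seen, y_test_seen = features[test_seen], labels[test_seen]
X_test_unseen, y_test_unseen = features[test_unseen], labels[test_unseen]
```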
Dataset Splits | Yes | For AwA1, CUB and SUN, the hyper-parameters are determined through a train-validation split of seen classes and are used to train the model on complete data. For AwA2 and aPY, we use the same hyper-parameters as AwA1 because of the similarity of the three datasets. The adjustment of hyper-parameters in our method is not complicated and follows some rules: the best value of K is roughly positively correlated with the number of seen classes M, and our experiments show that a slight increase in K will bring some redundant parameters to the network but has little impact on the results.
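Because hyper-parameters are selected on a train-validation split of seen classes under the GZSL protocol of Xian et al. (2017), model selection ultimately targets the standard GZSL metrics: per-class average accuracy on seen and unseen test classes and their harmonic mean H. A minimal sketch of those metrics follows; the function names are ours, since the paper releases no code.

```python
# Standard GZSL metrics from Xian et al. (2017): per-class average
# accuracy and the harmonic mean H of seen/unseen accuracies.
import numpy as np

def per_class_accuracy(y_true, y_pred):
    # Average the accuracy computed separately within each class,
    # so rare classes count as much as frequent ones.
    classes = np.unique(y_true)
    return float(np.mean([np.mean(y_pred[y_true == c] == c) for c in classes]))

def harmonic_mean(acc_seen, acc_unseen):
    # H penalizes models that sacrifice unseen-class accuracy for seen.
    if acc_seen + acc_unseen == 0:
        return 0.0
    return 2.0 * acc_seen * acc_unseen / (acc_seen + acc_unseen)
```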
Hardware Specification | No | The paper mentions using features extracted from ResNet-101 but does not provide specific hardware details (such as GPU/CPU models, memory, or cloud instances) used for training or running the CRnet experiments.
Software Dependencies | No | The paper mentions 'Adam optimizer' and 'ResNet-101' features but does not provide specific version numbers for any software, libraries, or programming languages used.
Experiment Setup | Yes | The specific details and training hyper-parameters of each dataset are summarized in Table 2. ... All models are trained at a learning rate of 10^-5 with the Adam optimizer until the loss converges.
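For readers who want to mirror the reported optimization setup, below is a minimal PyTorch sketch using Adam at a learning rate of 10^-5. The embedding model here is a generic placeholder, not the CRnet cooperation-module architecture, which the paper describes only in prose; the dimensions are illustrative assumptions.

```python
# Hedged sketch of the reported optimization setup: Adam, lr = 1e-5,
# trained until the loss converges. The model is a placeholder attribute
# embedding network, NOT the CRnet architecture from the paper.
import torch
import torch.nn as nn

attr_dim, feat_dim = 85, 2048  # AwA-like sizes, assumed for illustration

model = nn.Sequential(nn.Linear(attr_dim, feat_dim), nn.ReLU())
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
criterion = nn.CrossEntropyLoss()

def train_step(feats, labels, class_attrs):
    # Score image features against all embedded class attribute vectors
    # and train with cross-entropy over seen classes.
    class_emb = model(class_attrs)      # (num_seen_classes, feat_dim)
    logits = feats @ class_emb.t()      # (batch, num_seen_classes)
    loss = criterion(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```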