Semantic Compression Embedding for Generative Zero-Shot Learning
Authors: Ziming Hong, Shiming Chen, Guo-Sen Xie, Wenhan Yang, Jian Zhao, Yuanjie Shao, Qinmu Peng, Xinge You
IJCAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on three benchmark datasets, i.e., CUB, SUN and AWA2, demonstrate the significant performance gains of SC-EGG over current state-of-the-art methods and its baselines. |
| Researcher Affiliation | Academia | Ziming Hong¹, Shiming Chen¹, Guo-Sen Xie², Wenhan Yang³, Jian Zhao⁴, Yuanjie Shao¹, Qinmu Peng¹ and Xinge You¹. ¹Huazhong University of Science and Technology; ²Nanjing University of Science and Technology; ³Nanyang Technological University; ⁴Institute of North Electronic Equipment |
| Pseudocode | Yes | Algorithm 1 The algorithm of SC-EGG. |
| Open Source Code | Yes | The code of SC-EGG is available at the online page: https://github.com/HHHZM/SC-EGG |
| Open Datasets | Yes | We evaluate the proposed SC-EGG on three standard ZSL benchmark datasets: Caltech-UCSD-Birds (CUB) [Welinder et al., 2010], SUN Attribute (SUN) [Patterson and Hays, 2012] and Animals with Attributes2 (AWA2) [Xian et al., 2017]. |
| Dataset Splits | Yes | We use the training splits proposed in [Xian et al., 2018]. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for its experiments. It only mentions 'ResNet101', which is a model, not hardware. |
| Software Dependencies | No | The paper mentions software such as 'Adam as optimizer', 'ResNet101', and 'GloVe', but does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | We use Adam as the optimizer with lr = 10⁻⁴ and batch size = 64. We use a single-layer FC as the final CZSL or GZSL classifier. Hyperparameters λg, λw, λr, λs and λu are set to 1.0, 10.0, 0.01, 0.1 and 0.1, respectively. In addition, the settings of other hyperparameters in TFVAEGAN follow [Narayan et al., 2020]. We train SC-EGG for 20 epochs in stage 1, 20 epochs in stage 2, and at most 200 epochs in stage 3. |
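The Dataset Splits row refers to the proposed splits of [Xian et al., 2018]. As a point of reference, a minimal loading sketch is given below; it assumes the standard `res101.mat` / `att_splits.mat` files distributed with that benchmark, whose field names (`features`, `labels`, `att`, `trainval_loc`, `test_seen_loc`, `test_unseen_loc`) follow its documented layout. This is an illustrative sketch, not code from the paper.

```python
# Hedged sketch: load the proposed splits of [Xian et al., 2018].
# Assumes the benchmark's standard res101.mat / att_splits.mat files.
from scipy.io import loadmat

def load_split(data_dir: str):
    res101 = loadmat(f"{data_dir}/res101.mat")       # ResNet101 features + labels
    splits = loadmat(f"{data_dir}/att_splits.mat")   # class attributes + split indices

    features = res101["features"].T                  # (num_samples, 2048)
    labels = res101["labels"].squeeze() - 1          # MATLAB labels are 1-indexed
    attributes = splits["att"].T                     # (num_classes, attr_dim)

    # Split indices are also 1-based, so shift to 0-based.
    trainval = splits["trainval_loc"].squeeze() - 1
    test_seen = splits["test_seen_loc"].squeeze() - 1
    test_unseen = splits["test_unseen_loc"].squeeze() - 1
    return features, labels, attributes, trainval, test_seen, test_unseen
```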
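The Experiment Setup row lends itself to a short configuration sketch: Adam with lr = 10⁻⁴, batch size 64, and a single fully connected layer as the final CZSL/GZSL classifier trained for at most 200 epochs in stage 3. The PyTorch code below is a hedged reconstruction of only that final-classifier stage; the feature dimension, class count, and `train_classifier` helper are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of the stage-3 classifier setup described in the table:
# Adam with lr = 1e-4, batch size 64, single-layer FC classifier.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

feat_dim, num_classes = 2048, 200  # e.g. CUB; assumed dimensions, not from the paper
classifier = nn.Linear(feat_dim, num_classes)  # single-layer FC classifier
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_classifier(features: torch.Tensor, labels: torch.Tensor, epochs: int = 200):
    """Train the final CZSL/GZSL classifier on (real + synthesized) features."""
    loader = DataLoader(TensorDataset(features, labels), batch_size=64, shuffle=True)
    for _ in range(epochs):  # the paper trains at most 200 epochs in stage 3
        for x, y in loader:
            optimizer.zero_grad()
            loss = criterion(classifier(x), y)
            loss.backward()
            optimizer.step()
```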