Zero-Shot Logit Adjustment
Authors: Dubing Chen, Yuming Shen, Haofeng Zhang, Philip H.S. Torr
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments demonstrate that the proposed technique achieves state-of-the-art results when combined with the basic generator, and it can improve various generative Zero-Shot Learning frameworks. The paper also presents empirical evidence such as Figure 1 (t-SNE visualization of the synthetic-real unseen data and the real train-test seen data in AWA2) and Table 2 (GZSL performance comparisons with state-of-the-art methods). |
| Researcher Affiliation | Academia | Nanjing University of Science and Technology; University of Oxford |
| Pseudocode | No | The paper describes its methods using mathematical equations and textual explanations, but it does not provide any pseudocode blocks or clearly labeled algorithm figures. |
| Open Source Code | Yes | Our codes are available on https://github.com/cdb342/IJCAI-2022-ZLA. |
| Open Datasets | Yes | We study GZSL performed in Animals with Attributes 2 (AWA2) [Lampert et al., 2013], Attribute Pascal and Yahoo (APY) [Farhadi et al., 2009], Caltech-UCSD Birds-200-2011 (CUB) [Wah et al., 2011], and SUN Attribute (SUN) [Patterson and Hays, 2012]. |
| Dataset Splits | Yes | We study GZSL performed in Animals with Attributes 2 (AWA2)... Caltech-UCSD Birds-200-2011 (CUB)... and SUN Attribute (SUN)... following the common split (version 2) proposed in [Xian et al., 2017]. |
| Hardware Specification | No | The paper mentions 'due to device limitations' when explaining a batch size adjustment but does not specify any particular GPU or CPU models, or other hardware components used for the experiments. |
| Software Dependencies | No | The paper mentions 'The Adam optimizer is employed' and 'Leaky ReLU' but does not provide specific version numbers for any programming languages, libraries, or frameworks used (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | The Adam optimizer is employed with a learning rate of 1 × 10−3, and the batch size is set at 512 for evaluating our design. When plugging into CE-GZSL [Han et al., 2021], we employ a batch size of 256 in SUN and 512 in other datasets instead of the default 4096 (due to device limitations) while maintaining all other settings in the published paper. (See the configuration sketch below.) |
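
For reference, the reported settings in the Experiment Setup row translate into a minimal sketch like the one below. It assumes a PyTorch implementation (consistent with the authors' repository) and uses a placeholder generator, placeholder loss, and hypothetical feature/attribute dimensions; it is not the paper's exact architecture or training procedure.

```python
# Minimal sketch of the reported setup: Adam optimizer, learning rate 1e-3,
# batch size 512. Assumes PyTorch; the generator below is a placeholder with
# hypothetical feature/attribute dimensions, not the architecture from
# https://github.com/cdb342/IJCAI-2022-ZLA.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

FEATURE_DIM, ATTR_DIM = 2048, 85  # hypothetical sizes, for illustration only

generator = nn.Sequential(        # stand-in generator network
    nn.Linear(ATTR_DIM, 4096),
    nn.LeakyReLU(0.2),            # the paper mentions Leaky ReLU activations
    nn.Linear(4096, FEATURE_DIM),
)

optimizer = torch.optim.Adam(generator.parameters(), lr=1e-3)  # reported lr

# Dummy tensors stand in for real visual features and class attributes.
features = torch.randn(4096, FEATURE_DIM)
attributes = torch.randn(4096, ATTR_DIM)
loader = DataLoader(TensorDataset(attributes, features),
                    batch_size=512, shuffle=True)  # reported batch size

for attrs, feats in loader:
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(generator(attrs), feats)  # placeholder loss
    loss.backward()
    optimizer.step()
```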