Leveraging Sub-class Discrimination for Compositional Zero-Shot Learning

Authors: Xiaoming Hu, Zilei Wang

AAAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct extensive experiments on the challenging benchmark datasets, and the considerable performance improvement over state-of-the-art approaches is achieved, which indicates the effectiveness of our method. Our code is available at https://github.com/hxm97/SCD-CZSL. Experimental results demonstrate that our method outperforms the state-of-the-art methods by a significant margin. Also, the ablation study confirms that each proposed module can improve the model performance.
Researcher Affiliation | Academia | University of Science and Technology of China, Hefei, China; cjdc@mail.ustc.edu.cn, zlwang@ustc.edu.cn
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | Our code is available at https://github.com/hxm97/SCD-CZSL.
Open Datasets | Yes | We evaluate our method on two benchmark CZSL datasets, i.e., UT-Zappos (Yu and Grauman 2014) and C-GQA (Naeem et al. 2021). Specifically, UT-Zappos is a medium-sized dataset composed of 50025 images of shoes with 16 attribute categories and 12 object categories. Among them, 22998 images are used for training, 3214 for validation, and 2914 for test, respectively.
Dataset Splits | Yes | In UT-Zappos, ... 22998 images are used for training, 3214 for validation, and 2914 for test, respectively. ... We use the same data split as proposed in (Purushwalkam et al. 2019) and (Mancini et al. 2022). The detailed statistics of these datasets are summarized in Table 1. (A split-statistics sketch follows the table.)
Hardware Specification | Yes | Moreover, we conduct our method with PyTorch (Paszke et al. 2019) on an NVIDIA GTX 2080Ti GPU.
Software Dependencies | No | The paper mentions "PyTorch (Paszke et al. 2019)" but does not specify a version number for PyTorch or any other relevant software libraries.
Experiment Setup | Yes | The model is trained for 50 epochs using the Adam (Kingma and Ba 2014) optimizer with a learning rate of 1e-4 and weight decay of 5e-5. The temperature parameter τ is set as 0.05, and the weight of alignment loss α is fixed as 1. (A hedged configuration sketch follows the table.)
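
For reference, the quoted UT-Zappos split statistics can be captured in a small sanity-check structure. This is a minimal illustrative sketch, not the authors' released code; the class and field names are hypothetical, and only the numbers come from the paper.

```python
# Hypothetical container for the UT-Zappos statistics quoted above.
# The numbers are the paper's; the names are illustrative only.
from dataclasses import dataclass


@dataclass(frozen=True)
class CZSLDatasetStats:
    name: str
    num_images: int
    num_attributes: int
    num_objects: int
    train: int
    val: int
    test: int

    def check(self) -> None:
        # The split sizes must not exceed the total image count
        # (the official split uses only a subset of all images).
        assert self.train + self.val + self.test <= self.num_images


UT_ZAPPOS = CZSLDatasetStats(
    name="UT-Zappos",
    num_images=50025,
    num_attributes=16,
    num_objects=12,
    train=22998,
    val=3214,
    test=2914,
)
UT_ZAPPOS.check()
```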
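
Likewise, the quoted experiment setup maps onto a standard PyTorch training configuration. The sketch below assumes a generic model and dummy data; only the hyperparameters (50 epochs, Adam with learning rate 1e-4 and weight decay 5e-5, temperature τ = 0.05, alignment-loss weight α = 1) are taken from the paper, and the alignment loss itself is left as a placeholder.

```python
# A minimal training-configuration sketch, assuming a generic PyTorch setup.
# Only the hyperparameters below come from the paper; the model, data, and
# alignment loss are dummy stand-ins, not the authors' SCD implementation.
import torch
import torch.nn.functional as F

TAU = 0.05    # temperature for the similarity logits (paper: tau = 0.05)
ALPHA = 1.0   # weight of the alignment loss (paper: alpha = 1)
EPOCHS = 50   # paper: trained for 50 epochs

model = torch.nn.Linear(512, 128)  # stand-in for the real model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=5e-5)

# Dummy batch: 8 image features and 28 class embeddings (illustrative only;
# the real model scores attribute-object compositions, not 16 + 12 classes).
features = torch.randn(8, 512)
class_embeddings = torch.randn(28, 128)
labels = torch.randint(0, 28, (8,))

for epoch in range(EPOCHS):
    img_emb = F.normalize(model(features), dim=-1)
    cls_emb = F.normalize(class_embeddings, dim=-1)
    logits = img_emb @ cls_emb.t() / TAU   # temperature-scaled cosine logits
    cls_loss = F.cross_entropy(logits, labels)
    align_loss = torch.tensor(0.0)         # placeholder for the alignment term
    loss = cls_loss + ALPHA * align_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Dividing the cosine logits by a small temperature such as 0.05 sharpens the softmax over classes, which is the usual role of the τ parameter quoted in the setup.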