Revealing the Proximate Long-Tail Distribution in Compositional Zero-Shot Learning

Authors: Chenyi Jiang, Haofeng Zhang

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results demonstrate that our approach elevates the model's performance to the state-of-the-art level, without introducing additional parameters.
Researcher Affiliation | Academia | School of Computer Science and Engineering, Nanjing University of Science and Technology, China. {jiangchenyi, zhanghf}@njust.edu.cn
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to source code or explicitly state its release.
Open Datasets | Yes | MIT-States (Isola, Lim, and Adelson 2015), UT-Zappos (Yu and Grauman 2014), and C-GQA (Naeem et al. 2021).
Dataset Splits | No | The paper does not provide specific dataset split information (exact percentages, sample counts, or detailed splitting methodology) for train/validation/test sets. It mentions 'seen' and 'unseen' compositions but not explicit validation splits.
Hardware Specification | Yes | The overall model is trained using the Adam optimizer (Kingma and Ba 2014) on NVIDIA GTX 2080Ti GPU.
Software Dependencies | No | The paper mentions 'PyTorch (Paszke et al. 2019)' but does not provide a specific version number for PyTorch or any other software dependency.
Experiment Setup | Yes | We set the learning rate as 5 × 10^-4 and the batch size as 128. We train C_s, C_o and C_y with an early-stopping strategy; this takes about 400 epochs on MIT-States, 300 epochs on UT-Zappos and 400 epochs on C-GQA. For hyper-parameters, we set τ as 0.1, 0.1, 0.01, η as 1.0, 1.0, 1.0 and λ as 50, 10, 100 for MIT-States, UT-Zappos, and C-GQA, respectively.
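
To make the quoted setup easier to reproduce, the sketch below expresses it as a PyTorch training configuration. The Adam optimizer, learning rate of 5e-4, batch size of 128, per-dataset hyper-parameters (τ, η, λ), and epoch budgets are taken from the Experiment Setup row above; the classifier modules C_s/C_o/C_y, the loss function, the validation metric, and the early-stopping patience are hypothetical placeholders, since they are not specified in the excerpts reported here.

```python
# Minimal sketch of the reported training setup, assuming PyTorch.
# From the quoted setup: Adam, lr = 5e-4, batch size 128, early stopping,
# per-dataset epoch budgets and (tau, eta, lambda).
# Hypothetical placeholders: c_s/c_o/c_y modules, compute_loss, evaluate, patience.
import torch
from torch.utils.data import DataLoader

# Per-dataset hyper-parameters quoted in the paper.
HPARAMS = {
    "mit-states": {"tau": 0.10, "eta": 1.0, "lam": 50,  "max_epochs": 400},
    "ut-zappos":  {"tau": 0.10, "eta": 1.0, "lam": 10,  "max_epochs": 300},
    "c-gqa":      {"tau": 0.01, "eta": 1.0, "lam": 100, "max_epochs": 400},
}


def train(dataset_name, train_set, c_s, c_o, c_y, compute_loss, evaluate, patience=20):
    """Early-stopping loop over the state (C_s), object (C_o) and composition (C_y)
    classifiers. `compute_loss` and `evaluate` are caller-supplied placeholders;
    `patience` is an assumed value, not reported in the paper."""
    cfg = HPARAMS[dataset_name]
    loader = DataLoader(train_set, batch_size=128, shuffle=True)
    optimizer = torch.optim.Adam(
        list(c_s.parameters()) + list(c_o.parameters()) + list(c_y.parameters()),
        lr=5e-4,
    )

    best_score, stale_epochs = float("-inf"), 0
    for epoch in range(cfg["max_epochs"]):
        for images, state_labels, object_labels in loader:
            loss = compute_loss(
                c_s, c_o, c_y, images, state_labels, object_labels,
                tau=cfg["tau"], eta=cfg["eta"], lam=cfg["lam"],
            )
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

        score = evaluate(c_s, c_o, c_y)  # validation metric, checked once per epoch
        if score > best_score:
            best_score, stale_epochs = score, 0
        else:
            stale_epochs += 1
            if stale_epochs >= patience:
                break  # stop early once validation performance stops improving
```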