Continual Compositional Zero-Shot Learning
Authors: Yang Zhang, Songhe Feng, Jiazheng Yuan
IJCAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We design the CCZSL evaluation protocol and conduct extensive experiments on widely used benchmarks, demonstrating the superiority of our method compared to the state-of-the-art CZSL methods. |
| Researcher Affiliation | Academia | ¹School of Computer Science and Technology, Beijing Jiaotong University, Beijing, China; ²Tangshan Research Institute, Beijing Jiaotong University, Beijing, China; ³College of Science and Technology, Beijing Open University, Beijing, China. {23111124, shfeng}@bjtu.edu.cn, jzyuan@139.com |
| Pseudocode | Yes | Algorithm 1: The optimization procedure of CCZSL. (A generic session-wise training skeleton is sketched after the table.) |
| Open Source Code | No | The paper does not contain an explicit statement about releasing source code or a link to a code repository. |
| Open Datasets | Yes | We conduct experiments on two widely adopted datasets in CZSL, which are UT-Zappos [Yu and Grauman, 2014] and C-GQA [Naeem et al., 2021]. |
| Dataset Splits | Yes | Specifically, we split UT-Zappos into 3 sessions and C-GQA into 6 sessions. Details of the splits are presented in Table 2, including the number of new attributes, new objects, new training compositions, new validation compositions, and new testing compositions in each session. (A hedged session-split sketch follows the table.) |
| Hardware Specification | No | The paper mentions using a ResNet-18 pre-trained on ImageNet but does not specify any hardware used for training or inference (e.g., GPU model, CPU type). |
| Software Dependencies | No | The paper mentions using the Adam optimizer, but it does not specify any software libraries or their version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | The number of super-primitives K is set to 4 for UT-Zappos and 20 for C-GQA. The original primitive embeddings are learned from scratch. For all datasets, we train the model using the Adam optimizer with a learning rate of 5e-5. The temperature factor is 0.05 for all datasets. (A hedged training-setup sketch follows the table.) |
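The paper's session splits (Table 2) are not reproduced here, but the continual protocol they describe can be illustrated with a small helper. The sketch below is a minimal, assumed reconstruction: the `Session` structure, the chunking strategy, and the rule for assigning compositions are our guesses; only the session counts (3 for UT-Zappos, 6 for C-GQA) come from the paper.

```python
from dataclasses import dataclass, field
from typing import List, Tuple
import random

Pair = Tuple[str, str]  # an (attribute, object) composition


@dataclass
class Session:
    """One continual session: the primitives and compositions it introduces."""
    new_attributes: List[str]
    new_objects: List[str]
    new_pairs: List[Pair] = field(default_factory=list)


def split_into_sessions(attributes, objects, pairs, num_sessions, seed=0):
    """Partition primitives into disjoint per-session chunks, then assign each
    composition to the earliest session at which both primitives are known."""
    rng = random.Random(seed)
    attrs, objs = list(attributes), list(objects)
    rng.shuffle(attrs)
    rng.shuffle(objs)

    def chunks(xs, k):
        n = len(xs)
        return [xs[i * n // k:(i + 1) * n // k] for i in range(k)]

    sessions = [Session(a, o) for a, o in zip(chunks(attrs, num_sessions),
                                              chunks(objs, num_sessions))]
    seen_attrs, seen_objs, assigned = set(), set(), set()
    for session in sessions:
        seen_attrs.update(session.new_attributes)
        seen_objs.update(session.new_objects)
        for pair in pairs:
            if pair not in assigned and pair[0] in seen_attrs and pair[1] in seen_objs:
                session.new_pairs.append(pair)
                assigned.add(pair)
    return sessions


# Per the paper: 3 sessions for UT-Zappos, 6 for C-GQA, e.g.
# sessions = split_into_sessions(all_attrs, all_objs, all_pairs, num_sessions=3)
```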
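The reported hyperparameters can likewise be pinned down in code. The sketch below wires them into a PyTorch setup, assuming the common CZSL pattern of cosine similarity between image features and primitive embeddings; the `CZSLModel` architecture, `EMBED_DIM`, and the role of the super-primitive vectors are illustrative assumptions, not the CCZSL architecture. Only the ResNet-18 backbone, from-scratch primitive embeddings, Adam with lr 5e-5, temperature 0.05, and K = 4/20 come from the paper.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

# Hyperparameters reported in the paper; everything else below is assumed.
NUM_SUPER_PRIMITIVES = 4  # K = 4 for UT-Zappos, K = 20 for C-GQA
LEARNING_RATE = 5e-5
TEMPERATURE = 0.05
EMBED_DIM = 512           # hypothetical; matches ResNet-18's pooled feature width


class CZSLModel(nn.Module):
    """Minimal sketch: ImageNet-pretrained ResNet-18 features scored against
    summed attribute/object embeddings via temperature-scaled cosine similarity."""

    def __init__(self, num_attrs, num_objs, k=NUM_SUPER_PRIMITIVES):
        super().__init__()
        backbone = resnet18(weights="IMAGENET1K_V1")
        backbone.fc = nn.Identity()  # expose the 512-d pooled features
        self.backbone = backbone
        # Primitive embeddings learned from scratch, as stated in the paper.
        self.attr_embed = nn.Embedding(num_attrs, EMBED_DIM)
        self.obj_embed = nn.Embedding(num_objs, EMBED_DIM)
        # K learnable "super-primitive" vectors (their exact role is assumed).
        self.super_primitives = nn.Parameter(torch.randn(k, EMBED_DIM))

    def forward(self, images, pair_attr_ids, pair_obj_ids):
        feats = self.backbone(images)                                  # (B, 512)
        pair_emb = self.attr_embed(pair_attr_ids) \
                 + self.obj_embed(pair_obj_ids)                        # (P, 512)
        feats = nn.functional.normalize(feats, dim=-1)
        pair_emb = nn.functional.normalize(pair_emb, dim=-1)
        return feats @ pair_emb.t() / TEMPERATURE                      # (B, P)


model = CZSLModel(num_attrs=16, num_objs=12)  # toy vocabulary sizes
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
```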
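Algorithm 1 itself is not reproduced in this report; the loop below is only a generic session-wise continual-training skeleton consistent with the protocol described (train on each session in turn, then evaluate on all compositions seen so far). It reuses `sessions`, `model`, and `optimizer` from the sketches above; `make_loader`, `evaluate`, `all_attr_ids`, and `all_obj_ids` are hypothetical helpers.

```python
import torch.nn.functional as F

# Assumed session-wise skeleton, not Algorithm 1 verbatim.
num_epochs = 10  # hypothetical; the paper does not report epoch counts here
for session_id, session in enumerate(sessions):
    train_loader = make_loader(session.new_pairs, split="train")  # hypothetical
    model.train()
    for epoch in range(num_epochs):
        for images, pair_targets in train_loader:
            # all_attr_ids / all_obj_ids index every composition seen so far.
            logits = model(images, all_attr_ids, all_obj_ids)
            loss = F.cross_entropy(logits, pair_targets)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    # Continual evaluation: test on every session encountered so far.
    evaluate(model, sessions[: session_id + 1])                   # hypothetical
```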