Learning to Compose Soft Prompts for Compositional Zero-Shot Learning

Authors: Nihal V. Nayak, Peilin Yu, Stephen Bach

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this section, we describe our experiments with CSP. We compare CSP to CLIP-based baselines in the closed-world and open-world settings of compositional zero-shot learning." (Section 5, Experimental Evaluation)
Researcher Affiliation | Academia | "Nihal V. Nayak, Peilin Yu, Stephen H. Bach, Department of Computer Science, Brown University, Providence, RI 02906, USA. {nnayak2, pyu12, sbach}@cs.brown.edu"
Pseudocode | Yes | "F PSEUDOCODE: Figure 6 shows the Torch-like pseudocode for inference with CSP." (A hedged sketch of such inference appears after this table.)
Open Source Code | Yes | "The code is available at https://github.com/BatsResearch/csp."
Open Datasets | Yes | "We experiment with three attribute-object composition benchmarks: MIT-states (Isola et al., 2015), UT-Zappos (Yu & Grauman, 2014), and C-GQA (Naeem et al., 2021)."
Dataset Splits | Yes | "Table 1: Summary statistics of the datasets used in our experiments."
Hardware Specification | Yes | "We use a single NVIDIA RTX 3090 or V100 GPU depending on their availability to train all our models."
Software Dependencies | No | The paper mentions PyTorch but does not specify a version number or other key software dependencies with their versions.
Experiment Setup | Yes | "We train CSP and CoOp by minimizing the cross entropy loss with the Adam optimizer over the seen split in the dataset for 20 epochs." (A hedged training-loop sketch follows the table.)
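
Since the paper's Figure 6 is not reproduced here, the following is a minimal sketch of what Torch-like inference with CSP could look like, based only on the paper's description: learned soft embeddings for attribute and object vocabulary are composed into a prompt per candidate pair and scored against the image with CLIP-style cosine similarity. All names here (csp_logits, prefix_emb, encode_text_from_embeddings, etc.) are hypothetical and not taken from the released code.

```python
import torch
import torch.nn.functional as F

def csp_logits(images, candidate_pairs, clip_model, prefix_emb, attr_emb, obj_emb):
    """Hypothetical sketch of CSP scoring (cf. the paper's Figure 6).

    images:           preprocessed batch, shape (B, 3, H, W)
    candidate_pairs:  list of (attr_idx, obj_idx) compositions to score
    clip_model:       a frozen CLIP model (image encoder, text encoder, logit scale)
    prefix_emb:       frozen token embeddings for a prefix such as "a photo of", shape (P, d)
    attr_emb:         soft embeddings for attributes, shape (A, d)
    obj_emb:          soft embeddings for objects, shape (O, d)
    """
    # Encode the image batch once with the frozen CLIP image encoder.
    img_feat = F.normalize(clip_model.encode_image(images), dim=-1)        # (B, d)

    # Build one prompt per candidate composition: [prefix] [attr] [obj].
    prompts = torch.stack([
        torch.cat([prefix_emb, attr_emb[a].unsqueeze(0), obj_emb[o].unsqueeze(0)], dim=0)
        for a, o in candidate_pairs
    ])                                                                     # (C, P + 2, d)

    # encode_text_from_embeddings is an assumed helper: stock CLIP's encode_text
    # takes token ids, so a real implementation must feed these embeddings into
    # the text transformer directly.
    txt_feat = F.normalize(clip_model.encode_text_from_embeddings(prompts), dim=-1)  # (C, d)

    # Cosine similarities scaled by CLIP's temperature; a softmax over the
    # candidate dimension gives the predicted composition.
    return clip_model.logit_scale.exp() * img_feat @ txt_feat.t()          # (B, C)
```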
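
The Experiment Setup row quotes cross-entropy training with Adam over the seen split for 20 epochs. A minimal sketch of such a loop, reusing the hypothetical csp_logits above, might look like the following; the learning rate, data-loader interface, and argument names are placeholder assumptions, not values or APIs reported in the paper.

```python
import torch
import torch.nn.functional as F

def train_csp(clip_model, prefix_emb, init_attr_emb, init_obj_emb,
              seen_loader, seen_pairs, epochs=20, lr=1e-4):
    """Hypothetical training sketch: cross-entropy loss with Adam over the seen split.

    Only the attribute/object soft embeddings are optimized; the CLIP backbone
    and the prefix embeddings stay frozen. The learning rate is a placeholder.
    """
    attr_emb = torch.nn.Parameter(init_attr_emb.clone())    # (A, d)
    obj_emb = torch.nn.Parameter(init_obj_emb.clone())      # (O, d)
    optimizer = torch.optim.Adam([attr_emb, obj_emb], lr=lr)

    for _ in range(epochs):
        for images, pair_labels in seen_loader:              # labels index into seen_pairs
            logits = csp_logits(images, seen_pairs, clip_model,
                                prefix_emb, attr_emb, obj_emb)
            loss = F.cross_entropy(logits, pair_labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

    return attr_emb, obj_emb
```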