Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Scaling (Down) CLIP: A Comprehensive Analysis of Data, Architecture, and Training Strategies
Authors: Zichao Li, Cihang Xie, Ekin Dogus Cubuk
TMLR 2024 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | This paper investigates the performance of Contrastive Language-Image Pre-training (CLIP) when scaled down to limited computation budgets. We explore CLIP along three dimensions: data, architecture, and training strategies. Our experiments are conducted on a large image-and-language dataset, WebLI (Chen et al., 2022), which contains over 3.4 billion image-text pairs in English. |
| Researcher Affiliation | Collaboration | Zichao Li EMAIL Google DeepMind, University of California, Santa Cruz; Cihang Xie EMAIL University of California, Santa Cruz; Ekin Dogus Cubuk EMAIL Google DeepMind. |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. It describes methodologies in narrative text. |
| Open Source Code | No | The paper does not provide explicit access to source code for the methodology described. It does not contain a specific repository link, an explicit code release statement, or mention code in supplementary materials. |
| Open Datasets | Yes | Our experiments are conducted on a large image-and-language dataset, WebLI (Chen et al., 2022), which contains over 3.4 billion image-text pairs in English. We conduct our experiments on WebLI (Chen et al., 2022), a large image-and-language dataset built from the public web. For the evaluation of retrieval performance on MSCOCO captions, we report the results based on the test set. The findings regarding the zero-shot performance of CLIP models on ImageNet are depicted in Figure 2(a). |
| Dataset Splits | Yes | To ensure a fair comparison with prior work, we follow the same evaluation settings as Radford et al. (2021); Zhai et al. (2021). We use the same prompts collected from these works and preprocess the test images by first resizing them and then applying a central crop with a 0.875 aspect ratio to match the target resolution. For the evaluation of retrieval performance on MSCOCO captions, we report the results based on the test set. |
| Hardware Specification | No | The paper mentions setting a 'computation limit' for most experiments but does not provide specific hardware details such as GPU/CPU models, processor types, or memory amounts used for running its experiments. |
| Software Dependencies | No | The paper mentions using Adafactor as the optimizer and a SentencePiece tokenizer but does not specify version numbers for these or any other software components. |
| Experiment Setup | Yes | In our study, we adopt the hyper-parameter settings used in a previous work, Zhai et al. (2021). We use Adafactor (Shazeer & Stern, 2018) as the optimizer with β1 and β2 set to 0.9 and 0.999, respectively, and set the batch size of all our models to 16k. To adjust the learning rate, we use a cosine learning scheduler with an initial rate of 0.001, and we set the weight decay to 0.0001. In our base experiments (referred to as just CLIP), we do not apply any data augmentation to images, except for resizing them to 224x224 and normalizing pixel values to the range of -1 to 1. For text processing, we use a SentencePiece tokenizer with a vocabulary size of 32k. We set the token length to 16 and pad or truncate sentences to that length. |
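The pixel normalization and cosine learning-rate schedule quoted in the Experiment Setup row can be sketched in plain Python. This is a minimal illustration, not the authors' code: the paper states only the initial rate (0.001) and the scheduler type, so the decay-to-zero endpoint and the absence of warmup here are assumptions.

```python
import math

def normalize_pixels(value):
    """Map an 8-bit pixel value in [0, 255] to [-1, 1],
    as described in the paper's base experiment setup."""
    return value / 127.5 - 1.0

def cosine_lr(step, total_steps, base_lr=0.001):
    """Cosine decay from base_lr toward 0 over total_steps.
    Warmup, if any, is not specified in the paper and is omitted here."""
    progress = step / total_steps
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))
```

For example, `normalize_pixels(0)` gives -1.0 and `normalize_pixels(255)` gives 1.0, while `cosine_lr` starts at 0.001 and decays smoothly to 0 at the final step.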